What this article answers
To prepare PLC logic for IEC 61508 Edition 3 systematic capability audits, engineers need behavioral evidence showing that software responds deterministically and safely under defined fault conditions. A simulation environment such as OLLA Lab can be used as a bounded verification sandbox to test safety properties, document failure handling, and harden logic before formal audit and physical validation.
Software safety under IEC 61508 is not mainly a question of whether the code looks tidy. It is a question of whether the logic can be shown to behave correctly, repeatably, and safely when the process stops being polite.
That distinction matters more in Edition 3, where the burden of proof around software systematic behavior is expected to tighten rather than relax. Hardware failure analysis still revolves around probabilistic failure measures such as PFD (average probability of dangerous failure on demand) and PFH (average frequency of dangerous failure per hour). Software does not fail because it aged badly in a cabinet; it fails systematically through design error, specification gaps, unintended interactions, and untested edge cases.
A recent Ampergon Vallis internal benchmark supports that point. During an internal analysis of 500 simulated safety-instrumented function test cases in OLLA Lab, 68% of initial logic drafts failed a robustness check when subjected to analog drift, stale-state input behavior, or out-of-range forcing [Methodology: n=500 simulated SIF validation tasks across pump, conveyor, HVAC, and process-skid scenarios; baseline comparator = first-pass draft before revision; time window = Jan-Feb 2026]. This supports the claim that first-pass logic often misses abnormal-state handling. It does not support any claim about industry-wide defect rates or formal compliance outcomes.
What changes in IEC 61508 Edition 3 for software safety?
The practical change is a stronger emphasis on proving Systematic Capability through reproducible evidence, not merely asserting adherence to a lifecycle.
IEC 61508 has always treated software differently from hardware because software faults are systematic rather than random. In practice, that means Edition 3 discussions center on whether the development and verification process can demonstrate that software safety requirements were translated into controlled, testable behavior. “We reviewed the code carefully” is not a useless statement, but it is no longer a sufficient one.
A second change is the increasing expectation that software assurance will be integrated with adjacent concerns such as cybersecurity, configuration control, and toolchain discipline. That does not collapse IEC 61508 into IEC 62443, but the separation is no longer as comfortable as some teams would prefer.
Edition 2 vs. Edition 3 software expectations
| Topic | Edition 2 emphasis | Edition 3 direction of travel |
|---|---|---|
| Software assurance | Lifecycle adherence, review discipline, structural testing | Stronger behavioral evidence, reproducible verification, machine-testable proof where feasible |
| Fault handling | Often documented in narrative form | Increasing pressure for explicit abnormal-state testing and traceable outcomes |
| Tool support | Helpful but not central | More important where tools improve repeatability, traceability, and test coverage |
| Cybersecurity relationship | Often handled separately | More explicit interaction with secure development and system integrity concerns |
| Systematic Capability evidence | Process-heavy demonstration | Process plus observable proof that logic behaves safely under defined edge cases |
The important correction is this: Edition 3 does not mean software now gets a magic formula like hardware. It means software claims are expected to be backed by stronger evidence.
What is Systematic Capability in PLC software terms?
Systematic Capability is the demonstrated ability of the development process and resulting logic to avoid, detect, and control systematic faults to the level required by the target safety function.
For PLC engineers, that definition becomes concrete when translated into observable behaviors:
- Safety logic executes in a predictable and bounded way.
- State transitions are explicit and recoverable.
- Faults drive the system to a defined safe state.
- Non-safety logic does not corrupt or delay safety behavior.
- Test evidence is reproducible and traceable to requirements.
This is where syntax versus deployability becomes a useful contrast. A rung can be syntactically valid and still be unsafe to commission.
Systematic Capability is also not a product badge. It is not conferred by using a simulator, a code generator, or an AI assistant. It is established through disciplined specification, implementation, verification, documentation, and final validation in the real assurance workflow.
What are the 16 safety properties required for Systematic Capability?
The exact grouping varies across methodologies, but a practical set used in advanced functional safety work consists of the following sixteen behaviors, each of which engineers must be able to test and explain.
The 16 safety properties in operational terms
- Completeness — Every required operating mode, transition, trip path, and recovery path is defined.
- Correctness — The implemented logic matches the stated safety requirement and control philosophy.
- Consistency — Tags, states, transitions, and interlocks behave uniformly across the program.
- Determinism — The same inputs under the same conditions produce the same outputs within the required execution bounds.
- Robustness — The logic handles bad, noisy, stale, missing, or out-of-range inputs without unsafe behavior.
- Freedom from interference — Non-safety tasks, HMI actions, diagnostics, or ancillary logic do not alter safety behavior improperly.
- Traceability — Requirements, rungs, tags, tests, and results can be linked without guesswork.
- Verifiability — The code structure allows independent testing and clear pass/fail judgment.
- Maintainability — Future edits can be made without creating hidden safety regressions.
- Simplicity — The design avoids unnecessary complexity that obscures intent or increases fault risk.
- Defensiveness — The logic anticipates invalid states and handles them explicitly.
- Recoverability — After a fault, the system returns only through controlled and defined reset conditions.
- Boundedness — Timers, counters, analog scaling, and state transitions remain within known limits.
- Observability — Internal states and decision points can be inspected during verification.
- Fail-safe behavior — Loss of signal, disagreement, or invalid process state drives a safe response where required.
- Testability — Engineers can inject conditions and confirm expected outcomes without ambiguity.
The five properties PLC teams usually underestimate
- Determinism: Scan behavior must remain predictable under all relevant input combinations.
- Robustness: Analog drift, chattering contacts, and stale comms values must not produce unsafe state retention.
- Completeness: Every state-machine transition needs an entry condition and a safe exit condition.
- Freedom from interference: Display logic, messaging, and convenience features must not disturb safety execution.
- Verifiability: If the architecture cannot be tested cleanly, the audit problem starts before the site problem does.
These are engineering behaviors. If a team cannot demonstrate them under controlled test conditions, the audit discussion becomes more interpretive than it should be.
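To make that concrete, here is a minimal sketch of what a controlled robustness check can look like, written as a plain Python test model rather than real PLC code. The `pump_trip_logic` function, the thresholds, and the tag names are hypothetical stand-ins; the shape of the test, forcing abnormal inputs and asserting the defined safe outcome, is the point.

```python
# Minimal sketch: a robustness check for out-of-range analog handling.
# All names (pump_trip_logic, TRIP_HIGH, etc.) are hypothetical; real
# safety logic runs on a PLC, not in Python. The test shape is what
# matters: force abnormal inputs, assert the defined safe outcome.

TRIP_HIGH = 80.0                    # engineering-unit trip threshold
RANGE_LOW, RANGE_HIGH = 4.0, 20.0   # valid mA signal range

def pump_trip_logic(signal_ma: float, level_pct: float) -> dict:
    """Hypothetical trip logic: out-of-range signal or high level trips."""
    signal_bad = not (RANGE_LOW <= signal_ma <= RANGE_HIGH)
    trip = signal_bad or level_pct >= TRIP_HIGH
    return {"trip": trip, "pump_run": not trip}

def test_robustness():
    # Out-of-range, implausible, and NaN-like inputs must all trip.
    for bad_signal in (0.0, 3.2, 21.5, -1.0, float("nan")):
        out = pump_trip_logic(bad_signal, level_pct=50.0)
        assert out["trip"] and not out["pump_run"], bad_signal

test_robustness()
print("robustness check passed")
```

The same shape applies regardless of tooling: define the safe outcome first, then demonstrate that the logic reaches it under every abnormal input you can force.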
How should engineers define “Simulation-Ready” for safety-related PLC work?
“Simulation-Ready” should be defined operationally, not decoratively.
A Simulation-Ready engineer is able to prove, observe, diagnose, and harden control logic against realistic process behavior before that logic reaches a live process. That includes more than writing ladder syntax. It includes:
- mapping I/O to intended equipment behavior,
- defining what “correct” means before testing,
- forcing normal and abnormal conditions,
- tracing cause-and-effect through tags and states,
- identifying failure modes,
- revising the logic after a fault,
- and comparing simulated equipment state against ladder state.
This is the difference between drawing rungs and rehearsing commissioning judgment.
How does virtual simulation validate software determinism?
Virtual simulation validates determinism by making execution behavior observable under repeatable conditions.
In a bounded simulation environment, engineers can run logic, hold conditions constant, toggle inputs in controlled sequences, and observe whether outputs and internal states change exactly as intended. The point is repeatability.
With OLLA Lab, that verification workflow can include:
- running ladder logic in simulation mode without physical hardware,
- toggling discrete inputs and forcing analog values,
- monitoring tag state through the variables panel,
- comparing rung behavior to scenario objectives and equipment response,
- and repeating the same test after each revision.
For determinism checks, engineers should test at least these cases:
- identical input sequences repeated multiple times,
- asynchronous input changes near transition boundaries,
- timer-dependent transitions,
- reset and restart behavior,
- loss and restoration of permissives,
- analog threshold crossings with noise or drift.
A common misconception is that simulation only proves basic functionality. Used properly, it can also show whether the logic has stable behavioral boundaries.
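As an illustration of the replay principle, the sketch below models a scan cycle as a pure function and asserts that identical input sequences produce identical state and output traces. The `scan` function is a hypothetical start/stop seal-in circuit, not logic from any real scenario; in practice the same comparison would be run against recorded tag traces from the simulator.

```python
# Minimal sketch of a determinism replay check. The controller is a
# hypothetical stand-in: a scan function mapping (state, inputs) to
# (state, outputs). Identical sequences must yield identical traces.

def scan(state: dict, inputs: dict) -> tuple[dict, dict]:
    """One scan of a hypothetical start/stop seal-in circuit."""
    run = (inputs["start_pb"] or state["run"]) and not inputs["stop_pb"]
    new_state = {"run": run}
    return new_state, {"motor": run}

def run_sequence(sequence):
    state, trace = {"run": False}, []
    for inputs in sequence:
        state, outputs = scan(state, inputs)
        trace.append((dict(state), outputs))
    return trace

sequence = [
    {"start_pb": True,  "stop_pb": False},
    {"start_pb": False, "stop_pb": False},
    {"start_pb": False, "stop_pb": True},   # stop near a transition
    {"start_pb": False, "stop_pb": False},
]

# Identical input sequences must produce identical state/output traces.
assert run_sequence(sequence) == run_sequence(sequence)
print("determinism replay check passed")
```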
How can OLLA Lab be used as a bounded verification sandbox?
OLLA Lab should be positioned as a risk-contained verification sandbox, not as a certification engine.
Its operational value is straightforward: engineers can build ladder logic in a web-based editor, run it in simulation, inspect variables and I/O behavior, and validate logic against scenario-based machine models and digital twins before physical commissioning. That makes it useful for pre-audit hardening, fault rehearsal, and evidence capture.
Within that bounded role, OLLA Lab supports several relevant verification tasks:
- Ladder Logic Editor: build and revise control logic using standard instruction types, including timers, counters, comparators, math, logic, and PID.
- Simulation Mode: execute logic safely, stop and rerun tests, and force input conditions without hardware exposure.
- Variables Panel and I/O Visibility: inspect tags, outputs, analog values, and loop behavior while tracing cause-and-effect.
- 3D/WebXR/VR scenarios: compare ladder behavior against machine or process response in realistic operating contexts.
- Digital twin validation: test whether the intended sequence actually behaves correctly against a virtual equipment model.
- Scenario-based commissioning practice: rehearse interlocks, alarms, proof feedbacks, trips, permissives, and reset logic.
- GeniAI lab guide: provide guided support and ladder assistance during learning and test preparation.
That last point needs a boundary. AI assistance can accelerate drafting and explanation. It does not replace deterministic review, independent verification, or safety judgment.
What does digital twin validation mean in a functional safety workflow?
Digital twin validation means testing control logic against a virtual representation of equipment or process behavior to confirm that the logic’s decisions produce the intended system response.
In safety-related work, that means asking questions such as:
- Does a trip condition force the expected safe state?
- Does a proof feedback timeout behave correctly?
- Does a manual reset remain blocked until all permissives are healthy?
- Does analog failure handling prevent false restart or hidden unsafe continuation?
- Does the sequence recover cleanly after an abnormal stop?
This is where OLLA Lab becomes operationally useful. The platform’s scenario structure, I/O visibility, and digital twin framing allow engineers to test behavior rather than merely inspect syntax.
That said, digital twin validation is not a substitute for final site acceptance, device validation, or certified safety lifecycle activities. It is a pre-commissioning evidence layer.
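A minimal sketch of the idea, using a toy Python tank model in place of a real digital twin: the logic and the equipment model are stepped together, and the validation question is whether the trip actually bounds the process state. The `TankTwin` class, rates, and threshold are invented for illustration and do not represent OLLA Lab's twin models or APIs.

```python
# Minimal sketch of digital-twin-style validation: step a toy equipment
# model and the control logic together, then assert the intended system
# response. All values here are hypothetical.

TRIP_LEVEL = 90.0

class TankTwin:
    """Toy tank: level rises while the inlet valve is open."""
    def __init__(self):
        self.level = 0.0
    def step(self, inlet_open: bool):
        self.level += 5.0 if inlet_open else -1.0
        self.level = max(self.level, 0.0)

def logic(level: float) -> bool:
    """Hypothetical rung: close the inlet when level reaches trip."""
    return level < TRIP_LEVEL   # True = inlet commanded open

twin, history = TankTwin(), []
for _ in range(60):
    inlet = logic(twin.level)
    twin.step(inlet)
    history.append(twin.level)

# Validation question: did the trip actually bound the process state?
assert max(history) < TRIP_LEVEL + 5.0, "trip failed to bound level"
print(f"peak level {max(history):.1f}, bounded below trip + one step")
```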
What fault cases should engineers test before a Systematic Capability audit?
Engineers should test the fault cases that expose hidden assumptions in the logic, especially where state retention, permissives, and analog interpretation can fail silently.
A useful pre-audit fault set includes:
- Sensor out-of-range: low, high, NaN-equivalent, or implausible values
- Analog drift: gradual movement across alarm and trip thresholds
- Chattering discrete input: repeated transition noise on limit switches or feedbacks
- Stale-state input: value frozen while process conditions should be changing
- Loss of permissive: motor starter feedback lost, valve proof absent, pressure not established
- Power-cycle or restart condition: retained bits and startup state validation
- Manual reset misuse: reset available before hazard is cleared
- Sequence interruption: stop or trip during mid-step transition
- Communication dropout surrogate: frozen or invalid status from a dependent subsystem
- Interlock disagreement: command issued while feedback contradicts expected equipment state
These tests matter because many dangerous failures are not dramatic. They are quiet mismatches between what the ladder believes and what the equipment is actually doing.
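Three of those cases are easy to express as input generators. The sketch below is a hypothetical Python stand-in for conditions an engineer would force through a simulator's variables panel; the function names and parameters are illustrative, not any tool's API.

```python
# Minimal sketches of fault-case input generators for three cases above:
# analog drift, chattering contacts, and stale-state inputs. Plain Python
# stand-ins; names and shapes are illustrative only.
import itertools
import random

def analog_drift(start: float, rate: float, steps: int):
    """Gradual drift across a threshold, plus small measurement noise."""
    for i in range(steps):
        yield start + rate * i + random.uniform(-0.2, 0.2)

def chattering_contact(period: int, steps: int):
    """Discrete input that toggles every `period` scans (contact bounce)."""
    for i in range(steps):
        yield (i // period) % 2 == 0

def stale_value(live_values, freeze_after: int):
    """Value freezes mid-run while the process keeps changing."""
    frozen = None
    for i, v in enumerate(live_values):
        if i >= freeze_after:
            frozen = v if frozen is None else frozen
            yield frozen
        else:
            yield v

# Example: feed a drifting level signal toward a 80.0 trip threshold.
for level in itertools.islice(analog_drift(start=70.0, rate=0.5, steps=25), 5):
    print(f"forced level: {level:.2f}")
```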
What does an audit-ready engineering evidence package look like?
An audit-ready package should document engineering reasoning and behavioral proof, not just screenshots.
Use this compact structure for each safety-relevant scenario or function:
- System description: Define the equipment, process purpose, operating mode, and safety role.
- Operational definition of “correct”: State the exact expected behavior, including permissives, trips, reset conditions, timing, and safe state.
- Ladder logic and simulated equipment state: Show the relevant rungs, tag mapping, and the equipment or process state used in simulation.
- The injected fault case: Document the abnormal condition introduced, how it was forced, and why it matters.
- The revision made: Record the logic change, parameter adjustment, or state-handling correction made after the test.
- Lessons learned: Capture the engineering insight, such as a hidden assumption, missing permissive, ambiguous reset path, timing issue, or interference risk.
This structure is deliberately plain. Auditors and reviewers usually prefer evidence they can follow without interpretive archaeology.
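One way to keep that structure reproducible is to capture each test run as a serializable record. The sketch below mirrors the sections above as a Python dataclass; the field names and example values are assumptions, not a standardized evidence format.

```python
# Minimal sketch of the evidence structure above as a serializable record,
# so each test run can be preserved and reproduced. Field names mirror the
# package sections; the format itself is an assumption, not a standard.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvidenceRecord:
    system_description: str
    correct_behavior: str          # operational definition of "correct"
    logic_reference: str           # rungs, tags, simulated equipment state
    injected_fault: str            # what was forced, how, and why
    observed_result: str
    revision_made: str
    lessons_learned: str
    forced_conditions: dict = field(default_factory=dict)

record = EvidenceRecord(
    system_description="Lead/lag pump pair, level control, SIL-relevant trip",
    correct_behavior="High level trips both pumps; reset blocked until level < 80%",
    logic_reference="Rungs 12-18, tags LT_101, P_101_RUN, P_102_RUN",
    injected_fault="LT_101 forced to 3.1 mA (under-range) during lag start",
    observed_result="Initial draft kept lag pump running on stale level",
    revision_made="Added out-of-range detection driving trip state",
    lessons_learned="Under-range handling was assumed, not implemented",
    forced_conditions={"LT_101_mA": 3.1, "P_101_RUN": True},
)
print(json.dumps(asdict(record), indent=2))
```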
How can engineers generate audit-ready evidence using OLLA Lab?
Engineers can use OLLA Lab to generate reproducible pre-audit artifacts by tying each test to a scenario, a set of forced conditions, observable tag behavior, and a documented revision.
A practical workflow looks like this:
- Select a scenario with explicit operating objectives. For example, an E-Stop chain, lead/lag pump control, conveyor sequence, or AHU permissive set.
- Define the expected safe behavior before testing. State what must happen on trip, on reset, and on abnormal input.
- Run the ladder in simulation mode. Use the editor and simulation controls to execute the logic under normal conditions first.
- Force the fault through the variables panel. Inject out-of-range analog values, remove proof feedback, toggle interlocks, or simulate stale states.
- Observe and record the response. Confirm whether outputs, states, alarms, and reset paths behave as defined.
- Revise the logic and rerun the exact case. This is the important part. Evidence without revision history is often just a diary.
- Capture the scenario parameters and result summary. Preserve the test conditions so another reviewer can reproduce the result.
In that workflow, OLLA Lab’s value is not that it proves compliance on its own. Its value is that it helps engineers create a repeatable body of behavioral evidence before formal audit submission and before live equipment becomes the test bench.
What does a defensive E-Stop rung look like in ladder logic?
A defensive E-Stop implementation should enforce fail-safe loss behavior, explicit manual reset, and protection against tied-down or premature restart conditions.
[Language: Ladder Diagram - IEC 61131-3]
```
|----[/] E_STOP_OK ---------+----( ) SAFE_TRIP_ACTIVE
|                           |
|----[/] SAFETY_RELAY_FB ---+

|----[ ] E_STOP_OK ----[ ] SAFETY_RELAY_FB ----[ ] RESET_PB ----[/] SAFE_TRIP_ACTIVE ----[TON ANTI_TIEDOWN 500ms]----( ) RESET_PERMISSIVE

|----[ ] RESET_PERMISSIVE ----[ ] ALL_PERMISSIVES_OK ----[ ] NO_ACTIVE_FAULTS ----[/] START_INHIBIT ----( ) SAFETY_ENABLE

|----[ ] SAFETY_ENABLE ----( ) MOTOR_RUN_CMD
```
Why this pattern matters
- Completeness: restart requires defined healthy conditions, not just E-Stop restoration.
- Robustness: loss of safety relay feedback or E-Stop health forces trip behavior.
- Recoverability: reset is manual and conditioned.
- Fail-safe behavior: absence of healthy safety inputs removes enable.
- Freedom from interference: the safety path is explicit and separable from convenience logic.
In practice, the exact implementation depends on platform, safety architecture, and certified hardware path. The point here is structural: safe recovery should be earned, not assumed.
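To show how the pattern can be checked behaviorally, here is the same reset logic translated into a scan-style Python test model, with the 500 ms anti-tie-down timer approximated as a scan count. Tag names follow the rungs above; the model is a verification sketch, not certified safety logic.

```python
# Minimal behavioral sketch of the reset pattern above as a scan-style
# model, so the anti-tie-down and conditioned-reset behavior can be
# asserted. The TON is approximated as a scan count. Test model only.

ANTI_TIEDOWN_SCANS = 5   # stand-in for TON ANTI_TIEDOWN 500ms

def scan(state, io):
    s = dict(state)
    # Rung 1: loss of E-Stop health or relay feedback forces trip state.
    s["safe_trip_active"] = (not io["e_stop_ok"]) or (not io["safety_relay_fb"])
    # Rung 2: reset permissive requires healthy chain, held reset, no trip.
    chain_ok = (io["e_stop_ok"] and io["safety_relay_fb"]
                and io["reset_pb"] and not s["safe_trip_active"])
    s["ton"] = s["ton"] + 1 if chain_ok else 0
    s["reset_permissive"] = s["ton"] >= ANTI_TIEDOWN_SCANS
    # Rung 3: enable requires permissives, no faults, no start inhibit.
    s["safety_enable"] = (s["reset_permissive"] and io["all_permissives_ok"]
                          and io["no_active_faults"] and not io["start_inhibit"])
    return s

state = {"safe_trip_active": True, "ton": 0,
         "reset_permissive": False, "safety_enable": False}
healthy = {"e_stop_ok": True, "safety_relay_fb": True, "reset_pb": True,
           "all_permissives_ok": True, "no_active_faults": True,
           "start_inhibit": False}

# Reset must be earned: enable stays off until the timer has matured.
for i in range(ANTI_TIEDOWN_SCANS + 1):
    state = scan(state, healthy)
    assert state["safety_enable"] == (i + 1 >= ANTI_TIEDOWN_SCANS)

# Fail-safe: losing relay feedback removes enable on the next scan.
state = scan(state, {**healthy, "safety_relay_fb": False})
assert state["safe_trip_active"] and not state["safety_enable"]
print("E-Stop pattern checks passed")
```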
How do 3D and VR simulations help with software safety evidence?
3D and VR simulations help when they improve observability of process consequence, not when they merely add visual theater.
In OLLA Lab, 3D/WebXR/VR scenarios can help engineers compare ladder state against visible equipment response. That is useful when testing:
- sequence progression,
- actuator timing,
- proof feedback dependencies,
- alarm conditions,
- interlocked movement,
- and operator-reset consequences.
The engineering benefit is that logic errors become easier to spot when the virtual equipment does something obviously wrong for a traceable reason.
That said, the evidence remains software-side and simulation-bounded. It strengthens pre-commissioning verification. It does not replace physical validation, certified device behavior, or the formal safety case.
How should teams use AI assistance without weakening safety rigor?
Teams should use AI assistance for acceleration at the draft and explanation layer, then apply deterministic human review at the decision layer.
In OLLA Lab, GeniAI can help with onboarding, rung explanation, corrective suggestions, and ladder drafting support. That is useful, especially for structured learning and early-stage iteration. It reduces friction, but friction reduction is not the same thing as safety assurance.
For safety-related logic, teams should require:
- explicit requirement mapping,
- independent review of generated logic,
- fault-injected simulation,
- documented revision after failed cases,
- and final approval by a qualified engineer within the project’s safety lifecycle.
The memorable contrast is simple: draft generation versus deterministic veto. The second one is the job.
What should engineers do next if they are preparing for Edition 3 audits?
Engineers should start by converting abstract safety claims into repeatable test cases.
A practical sequence is:
- identify the safety-relevant functions in the PLC scope,
- define correct behavior for normal, trip, reset, and abnormal states,
- map each function to a small set of safety properties,
- run fault-injected simulation before hardware testing,
- document revisions in a compact evidence package,
- and reserve live commissioning for validation, not first discovery.
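The output of that sequence can be as simple as a traceability mapping from each safety function to its target properties and reproducible test cases. The sketch below uses invented function and test-case IDs purely to show the shape.

```python
# Minimal sketch of the traceability mapping the sequence above produces:
# each safety function linked to target properties and reproducible test
# cases. All IDs and names are illustrative placeholders.

safety_function_map = {
    "SF-01 high-level pump trip": {
        "properties": ["robustness", "fail-safe behavior", "recoverability"],
        "test_cases": ["TC-01 analog drift over trip", "TC-02 under-range signal",
                       "TC-03 reset blocked until level clears"],
    },
    "SF-02 conveyor E-Stop chain": {
        "properties": ["determinism", "completeness", "freedom from interference"],
        "test_cases": ["TC-04 trip mid-step", "TC-05 reset tie-down",
                       "TC-06 HMI command during trip"],
    },
}

for function, links in safety_function_map.items():
    assert links["test_cases"], f"{function} has no reproducible evidence"
    print(function, "->", ", ".join(links["test_cases"]))
```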
If your current workflow still treats abnormal-state testing as something that happens once the panel is energized, the process is late.
_Image alt-text: Screenshot of the OLLA Lab Variables Panel demonstrating a Systematic Capability test. An analog input is forced out-of-range, and the logic transitions to a safe state, illustrating the robustness property in a simulated IEC 61508 audit workflow._
Keep exploring

Book an OLLA Lab implementation walkthrough →

References
- IEC 61131-3: Programmable controllers — Part 3: Programming languages
- IEC 61508 overview (functional safety)
- NIST AI Risk Management Framework (AI RMF 1.0)
- Digital Twin in Manufacturing: A Categorical Literature Review and Classification (IFAC, DOI)
- Digital Twin in Industry: State-of-the-Art (IEEE, DOI)