What this article answers
To protect PLC logic from intrusion under IEC 62443-4-2, engineers should implement component-level access control, communication-integrity checks, and deterministic safe-state behavior inside the control logic itself. OLLA Lab provides a bounded simulation environment to rehearse lockouts, heartbeat-loss detection, and intrusion-response validation before those behaviors are trusted near live equipment.
Perimeter security is necessary, but it is not the last line of defense. If a threat actor reaches the control network, the PLC is no longer just executing process logic; it is deciding whether unsafe commands become physical motion.
IEC 62443-4-2 matters here because it shifts part of the security burden into the component itself. That includes identification, authentication strength, communication integrity, and access to audit-relevant events at the device level, not merely at the firewall. In practice, that means the ladder logic should reject impossible or unauthorized state changes, detect loss of trusted supervision, and force the process into a defined safe condition.
Ampergon Vallis Metric: In 24 OLLA Lab red-team simulations of forced unauthorized state changes against a pump-and-valve permissive model, 24 of 24 attempts were trapped by a heartbeat-loss interlock plus explicit run permissives before the simulated motor entered a commanded run state [Methodology: n=24 simulated intrusion trials on one security-permissive training scenario, baseline comparator = same scenario without heartbeat-loss interlock and lockout logic, observed during March 2026]. This supports the value of logic-level fallback controls in that scenario. It does not establish general breach-rate reduction across all PLC platforms, architectures, or plants. A simulator is useful for evidence, not for mythology.
Why is logic-level security required by IEC 62443?
Logic-level security is required because IEC 62443 does not treat the PLC as a passive endpoint. IEC 62443-4-2 defines component security requirements for embedded and host devices, including controls related to identification and authentication, communication integrity, and audit-relevant behavior.
The practical shift is simple: a PLC must not assume that every command arriving from a trusted network path is legitimate. That assumption was always optimistic. It is merely less defensible now.
Relevant requirements commonly cited in this context include:
- CR 1.1 — Human user identification and authentication: the component should support identification and authentication of human users.
- CR 1.7 — Strength of password-based authentication: password mechanisms must meet minimum strength expectations.
- CR 3.1 — Communication integrity: the component should protect the integrity of communications or detect integrity failures.
- CR 6.1 — Audit log accessibility: security-relevant events must be available for review and investigation.
These requirements do not mean every security function is implemented directly in ladder logic. Some belong in firmware, controller configuration, HMI design, or surrounding architecture. The engineering point is narrower and more useful: where process safety or equipment protection depends on command validity, the control program must enforce deterministic permissives and abnormal-state behavior even after upstream trust has failed.
A common misconception is that cybersecurity and control logic are separate disciplines. They are not separate once a bad command can start a motor against a closed valve, defeat a sequence, or hold an output in a state that the process should never tolerate. At that point, a network issue can become mechanical damage rather quickly.
CISA advisories on industrial products repeatedly surface weaknesses such as improper access control (CWE-284) and cleartext transmission of sensitive information (CWE-319) in legacy environments. Those advisories do not imply that ladder logic alone solves the problem. They do reinforce a harder truth: if credentials, sessions, or command paths are weak, the controller program should be written to distrust unsafe state transitions.
How should engineers define “Simulation-Ready” for PLC security validation?
“Simulation-Ready” should be defined as the ability to prove, observe, diagnose, and harden control logic against realistic process behavior before deployment to a live process. It is not a synonym for “can write ladder syntax.”
Operationally, a Simulation-Ready engineer can:
- define what the process is allowed to do,
- encode those limits as permissives, trips, and lockouts,
- inject abnormal conditions,
- observe tag behavior and equipment response,
- revise the logic after failure,
- and document why the revised behavior is more correct.
That distinction matters because many training environments stop at syntax and never reach deployability. A rung that compiles is not yet a control strategy. A rung that survives bad inputs, loss of supervision, and contradictory field states is closer.
In this article, OLLA Lab is positioned within that bounded role. It is a web-based ladder logic and simulation environment where engineers can rehearse high-risk validation tasks safely: monitoring I/O, forcing abnormal values, tracing cause and effect, comparing ladder state to simulated equipment state, and revising logic after a fault. It is not a substitute for site acceptance, formal compliance, or demonstrated competence on a live plant.
How do you program password protection and access permissives in ladder logic?
Password protection in ladder logic should be treated as a bounded access-control mechanism, not as a complete identity platform. The useful pattern is to verify an HMI-entered value, count failed attempts, and latch a lockout state that blocks privileged commands until a supervised reset condition is met.
A compact implementation can be built from standard instructions:
- `EQU` to compare entered and stored values
- `CTU` to count failed attempts
- latched coil / memory bit to hold lockout state
- permissive contacts to block protected actions
- supervised reset condition to clear lockout
Core logic pattern
Objective: Permit an administrative or maintenance action only when the entered PIN matches the stored value and the system is not in lockout.
Suggested tags:
- `HMI_PIN_Entry` : integer entered from HMI
- `Stored_Admin_PIN` : integer constant or secured value
- `PIN_Submit_Pulse` : one-shot from HMI submit action
- `PIN_Match` : internal bit
- `Failed_Attempt_CTU` : counter
- `System_Lockout_Alarm` : latched bit
- `Admin_Access_Granted` : internal bit
- `Lockout_Reset_Request` : supervised reset command
Example ladder logic flow
Rung 1: Evaluate submitted PIN
```ladder
| PIN_Submit_Pulse |----[EQU HMI_PIN_Entry Stored_Admin_PIN]----( PIN_Match )
```
Rung 2: Count failed attempts
```ladder
| PIN_Submit_Pulse |----[/PIN_Match]----------------------------[CTU Failed_Attempt_CTU PRE 3]
```
Rung 3: Latch lockout when failed attempts reach preset
```ladder
| Failed_Attempt_CTU.ACC >= 3 |---------------------------------(L) System_Lockout_Alarm
```
Rung 4: Grant access only if match is true and no lockout exists
```ladder
| PIN_Submit_Pulse |----[PIN_Match]----[/System_Lockout_Alarm]--( Admin_Access_Granted )
```
Rung 5: Reset lockout only under supervised conditions
```ladder
| Lockout_Reset_Request |----[Supervisor_Mode]------------------(U) System_Lockout_Alarm
| Lockout_Reset_Request |----[Supervisor_Mode]------------------[RES Failed_Attempt_CTU]
```
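For readers who want to trace the scan behavior outside a PLC, the five rungs above can be sketched as a small Python state machine. The class and attribute names mirror the suggested tag list; they are illustrative, not part of any vendor instruction set.

```python
class PinLockout:
    """Sketch of the PIN/lockout rung pattern; not a vendor API."""

    def __init__(self, stored_pin: int, max_attempts: int = 3):
        self.stored_pin = stored_pin
        self.max_attempts = max_attempts   # CTU preset
        self.failed_attempts = 0           # Failed_Attempt_CTU.ACC
        self.lockout = False               # System_Lockout_Alarm (latched)

    def submit(self, entered_pin: int) -> bool:
        """One PIN_Submit_Pulse. Returns Admin_Access_Granted."""
        if self.lockout:
            return False                   # rung 4: lockout blocks access
        pin_match = entered_pin == self.stored_pin   # rung 1: EQU
        if not pin_match:
            self.failed_attempts += 1      # rung 2: CTU on failed attempt
            if self.failed_attempts >= self.max_attempts:
                self.lockout = True        # rung 3: latch lockout
            return False
        return True                        # rung 4: match and no lockout

    def supervised_reset(self, supervisor_mode: bool) -> None:
        """Rung 5: unlatch lockout and reset the counter only under supervision."""
        if supervisor_mode:
            self.lockout = False
            self.failed_attempts = 0
```

Note that, as in the ladder version, a correct PIN entered during lockout is still rejected: access requires both the match and the absence of the latched lockout bit.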
The engineering purpose is not elegant credential management. It is deterministic control over privileged actions. If the HMI is being brute-forced, the PLC should stop accepting guesses after a defined threshold and should require an explicit supervised recovery path.
What this logic does well
- blocks repeated trial-and-error attempts,
- creates a visible lockout state,
- prevents protected commands from executing after repeated failures,
- provides a clear event for operator or maintenance review.
What this logic does not do
- it does not replace secure user management in the HMI or controller platform,
- it does not encrypt credentials,
- it does not satisfy every authentication requirement of IEC 62443 by itself,
- it does not prove that the surrounding architecture is secure.
That boundary matters. A counter is not an identity system.
How should access permissives be structured so unsafe commands are rejected?
Access permissives should be structured around process validity first, user privilege second. A valid user command should still fail if the process state makes the command unsafe.
For example, a pump start command should not energize the run output merely because an authenticated HMI user requested it. The rung should also require:
- discharge path available,
- suction or level condition acceptable,
- no active lockout,
- no trip active,
- heartbeat healthy,
- mode and sequence state valid,
- proof feedbacks consistent with expected pre-start state.
A compact permissive model looks like this:
```ladder
| Start_Command |
|----[Admin_Access_Granted OR Operator_Run_Permitted]
|----[/System_Lockout_Alarm]
|----[HMI_Heartbeat_Healthy]
|----[Discharge_Valve_Open_Proof]
|----[/Pump_Trip_Active]
|----[Auto_Sequence_Ready]
|----------------------------------------------------( Pump_Run_Permissive )
```
Then the output rung should consume `Pump_Run_Permissive`, not the raw command.
That separation is important. Command intent is not command authority. In secure control logic, the command asks; the permissive decides.
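The same authority split can be expressed as a pure function: the raw command is one input among many, and every permissive must agree before run authority exists. Parameter names mirror the rung above and are illustrative.

```python
def pump_run_permissive(
    start_command: bool,
    access_granted: bool,
    operator_run_permitted: bool,
    system_lockout: bool,
    heartbeat_healthy: bool,
    discharge_valve_open_proof: bool,
    pump_trip_active: bool,
    auto_sequence_ready: bool,
) -> bool:
    """The command asks; this function decides. Any failed condition wins."""
    return (
        start_command
        and (access_granted or operator_run_permitted)
        and not system_lockout
        and heartbeat_healthy
        and discharge_valve_open_proof
        and not pump_trip_active
        and auto_sequence_ready
    )
```

A forced `start_command` with a false valve proof or an active lockout still evaluates to no run authority, which is exactly the behavior the output rung should consume.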
What is a heartbeat monitor and how does it detect intrusion?
A heartbeat monitor is a logic pattern that confirms continued communication from a trusted supervisory source by requiring a periodic bit transition within a defined time window. If the bit stops changing, the PLC treats that as loss of supervision and removes run authority or drives the process to a safe state.
This is one practical way to support the intent behind communication-integrity requirements. It does not prove that the sender is benevolent, but it does detect one common failure mode: the expected HMI or supervisory session has disappeared, stalled, or been displaced.
Why heartbeat logic matters
If a legitimate HMI is expected to toggle a bit every second and that bit stops changing, several possibilities exist:
- the HMI has failed,
- communications have been interrupted,
- the supervisory session has frozen,
- a rogue device has replaced or bypassed the expected path,
- or the process is no longer under the control assumptions the PLC was designed to trust.
The controller should react to that loss deterministically. Waiting politely is rarely a control strategy.
Example heartbeat design using `TON`
Suggested tags:
- `HMI_Heartbeat_Bit` : toggled by HMI
- `Last_Heartbeat_State` : stored previous state
- `Heartbeat_Change_Pulse` : one-shot when state changes
- `Heartbeat_Timer` : `TON`
- `HMI_Heartbeat_Healthy` : internal bit
- `System_Run_Permissive` : internal bit used elsewhere
Logic concept
Rung 1: Detect heartbeat change
```ladder
| HMI_Heartbeat_Bit XOR Last_Heartbeat_State |------------------( Heartbeat_Change_Pulse )
```
Rung 2: Update stored state on change
```ladder
| Heartbeat_Change_Pulse |--------------------------------------( Last_Heartbeat_State := HMI_Heartbeat_Bit )
```
Rung 3: Reset timer when heartbeat changes; time out if no change occurs
```ladder
| /Heartbeat_Change_Pulse |-------------------------------------[TON Heartbeat_Timer PRE 2000ms]
```
Rung 4: Declare heartbeat healthy only when timer is not done
```ladder
| /Heartbeat_Timer.DN |-----------------------------------------( HMI_Heartbeat_Healthy )
```
Rung 5: Remove run permissive on heartbeat loss
```ladder
| Existing_Process_Permissives |----[HMI_Heartbeat_Healthy]-----( System_Run_Permissive )
```
The exact implementation varies by PLC family and instruction set. The principle does not: if the trusted supervisory source stops behaving like the trusted supervisory source, the PLC should degrade safely.
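The rung sequence can be approximated in Python by modeling the `TON` as milliseconds accumulated since the last observed bit transition, with the 2000 ms preset from the example rung. The class is a teaching sketch under those assumptions, not a vendor API.

```python
class HeartbeatMonitor:
    """Sketch of the heartbeat rungs as a scan loop."""

    def __init__(self, timeout_ms: int = 2000):
        self.timeout_ms = timeout_ms   # TON preset
        self.last_state = False        # Last_Heartbeat_State
        self.elapsed_ms = 0            # TON accumulator
        self.healthy = False           # HMI_Heartbeat_Healthy

    def scan(self, heartbeat_bit: bool, scan_ms: int) -> bool:
        changed = heartbeat_bit != self.last_state   # rung 1: XOR edge detect
        self.last_state = heartbeat_bit              # rung 2: store state
        if changed:
            self.elapsed_ms = 0                      # rung 3: timer reset on change
        else:
            self.elapsed_ms += scan_ms               # rung 3: TON accumulates
        self.healthy = self.elapsed_ms < self.timeout_ms   # rung 4: /Timer.DN
        return self.healthy
```

With a bit toggling every 500 ms the monitor stays healthy; freeze the bit and `healthy` drops once the accumulated time reaches the preset, at which point rung 5 would remove `System_Run_Permissive`.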
Choosing the timeout window
The timeout should be based on:
- expected HMI update rate,
- network determinism,
- process criticality,
- nuisance-trip tolerance,
- and the safe-state consequences of a false positive.
A 500 ms timeout may be appropriate in some fast supervisory contexts. A 2000 ms timeout may be more stable in others. The correct number is the one justified by the process and tested under realistic scan and communications behavior, not the one that looks stern in a diagram.
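One way to make that justification explicit is a small arithmetic helper. The factor names and the worked numbers below are assumptions for illustration, not a standard formula.

```python
def heartbeat_timeout_ms(update_period_ms: int,
                         missed_updates_tolerated: int,
                         jitter_margin_ms: int) -> int:
    """Timeout sized to tolerate a defined number of missed updates
    plus an allowance for network and scan jitter."""
    return update_period_ms * (missed_updates_tolerated + 1) + jitter_margin_ms

# e.g. a 500 ms HMI update rate, tolerating 2 missed updates,
# with a 250 ms jitter allowance: 500 * 3 + 250 = 1750 ms,
# which might be rounded up to a 2000 ms preset in practice.
```

The point is not the specific margins; it is that each term in the preset can be traced to a measured or assumed property of the system and then tested under realistic scan behavior.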
How can you simulate a brute-force attack in OLLA Lab?
You can simulate a brute-force attack in OLLA Lab by forcing repeated invalid credential values through the Variables Panel, observing the counter and lockout bits in simulation, and confirming that the digital twin or simulated equipment remains in a safe state despite continued run requests.
This is where OLLA Lab becomes operationally useful. Practicing this on a live process would be a poor way to preserve uptime.
3 steps to test defensive logic
#### 1. Inject anomalous data
Use the Variables Panel to manipulate:
- `HMI_PIN_Entry`
- `PIN_Submit_Pulse`
- any related command bits such as `Start_Command`
Enter incorrect integer values repeatedly and pulse the submit bit as though an HMI session were being brute-forced.
What to observe:
- whether `PIN_Match` remains false,
- whether the `Failed_Attempt_CTU.ACC` increments once per attempt,
- whether one-shots behave correctly and do not overcount due to scan behavior.
#### 2. Verify lockout execution
Continue invalid submissions until the counter reaches its preset.
Expected result:
- `Failed_Attempt_CTU.ACC` reaches `3`,
- `System_Lockout_Alarm` energizes and latches,
- `Admin_Access_Granted` remains false,
- protected commands no longer create downstream permissives.
This validation is important because counters and latches are easy to draw and surprisingly easy to mis-handle. Scan-cycle details are where confidence gets corrected.
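The overcounting risk can be demonstrated directly: a submit bit held true across several scans must be edge-conditioned so the counter increments once per press, not once per scan. A minimal Python sketch of that one-shot behavior, with illustrative names:

```python
class OneShotCounter:
    """Sketch of rising-edge (one-shot) conditioning on a failed-attempt counter."""

    def __init__(self):
        self.prev_submit = False   # stored state for edge detection
        self.acc = 0               # Failed_Attempt_CTU.ACC

    def scan(self, submit_bit: bool, pin_match: bool) -> None:
        rising_edge = submit_bit and not self.prev_submit   # one-shot
        self.prev_submit = submit_bit
        if rising_edge and not pin_match:
            self.acc += 1          # counts once per press, not per scan
```

If the edge detection were dropped, a submit bit held true for five scans would push the accumulator to 5 and trip the lockout from a single attempt, which is the kind of scan-cycle surprise the simulation exists to catch.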
#### 3. Validate safe state in the simulation
Use OLLA Lab’s simulation environment to confirm that equipment state follows the lockout logic, not the hostile command.
For a pump-and-valve example, verify that:
- the motor output remains de-energized,
- the valve does not transition into an unsafe sequence,
- `Pump_Run_Permissive` drops out,
- subsequent run commands remain blocked until supervised reset.
If a 3D or digital twin view is available for the selected scenario, compare the ladder state against the simulated equipment state directly. That is the useful definition of digital twin validation here: the virtual equipment should visibly obey the defensive control logic under abnormal conditions.
How do you simulate heartbeat loss and unauthorized state changes in OLLA Lab?
You simulate heartbeat loss by stopping or freezing the heartbeat bit updates, then observing whether the timer expires and the process transitions to the intended safe state. You simulate unauthorized state changes by forcing command or status tags to values that should be rejected by the permissive model.
Heartbeat-loss test procedure
1. Run the scenario normally with a healthy toggling heartbeat.
2. Confirm `HMI_Heartbeat_Healthy` is true and `System_Run_Permissive` can be established under valid process conditions.
3. Freeze `HMI_Heartbeat_Bit` in the Variables Panel.
4. Observe the `TON` accumulator until the timeout expires.
5. Verify that:
   - `Heartbeat_Timer.DN` becomes true,
   - `HMI_Heartbeat_Healthy` drops false,
   - `System_Run_Permissive` drops out,
   - the simulated equipment transitions to the defined safe state.
Unauthorized state-change test procedure
Force a command that should be impossible under current process conditions. For example:
- command pump run while discharge valve proof is false,
- command sequence advance while a prior step is incomplete,
- force a maintenance-only command while `System_Lockout_Alarm` is active.
Expected behavior:
- the raw command bit may change,
- the permissive should remain false,
- the output should not energize,
- and any alarm or diagnostic bit should indicate why the action was rejected.
That distinction is worth preserving in documentation: an unauthorized state change is not merely a changed tag value; it is a changed tag value that fails to earn process authority.
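That rule can be made concrete in a few lines: the output consumes the permissive, never the raw command, so a forced tag changes state without earning authority. The function and tag names below are illustrative.

```python
def motor_output(raw_start_command: bool,
                 discharge_valve_proof: bool,
                 lockout_active: bool) -> bool:
    """The output rung consumes the permissive, never the raw command."""
    run_permissive = (raw_start_command
                      and discharge_valve_proof
                      and not lockout_active)
    return run_permissive

# Forcing the start command true while valve proof is false:
# the tag changed, but no motion follows.
```

In the simulation, this is exactly what should be observed: the forced command bit flips, the permissive stays false, and the motor output never energizes.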
What should engineers document as evidence of defensive PLC skill?
Engineers should document a compact body of engineering evidence, not a screenshot gallery. The point is to show reasoning, validation discipline, and revision under fault conditions.
Use this structure:
- System Description: Define the process unit, control objective, operating modes, and protected actions. Example: centrifugal pump with discharge valve proof, HMI-issued start command, and maintenance lockout path.
- Operational definition of "correct": State the exact expected behavior. Example: the pump may run only when valve proof is true, no lockout is active, heartbeat is healthy, and no trip exists; on heartbeat loss, run permissive must drop within 2 seconds.
- Ladder logic and simulated equipment state: Include the relevant rungs, tag list, and the corresponding equipment behavior in simulation. The ladder and the machine model should tell the same story.
- The injected fault case: Record the abnormal test: brute-force PIN attempts, frozen heartbeat bit, forced run command against failed permissives, contradictory proof feedback, and so on.
- The revision made: Show what changed after the failed or incomplete first attempt. Example: added one-shot conditioning to prevent overcounting, latched lockout alarm, or separated raw command from run permissive.
- Lessons learned: State what the test revealed about scan behavior, permissive design, fault handling, or operator recovery.
This is the kind of evidence that demonstrates Simulation-Ready thinking. It shows that the engineer can move from logic drafting to validation and correction. Reviewers generally find that more useful than a folder full of screenshots.
How does OLLA Lab support bounded cybersecurity rehearsal without overstating the claim?
OLLA Lab supports bounded cybersecurity rehearsal by giving engineers a web-based environment to build ladder logic, run simulation, inspect variables and I/O behavior, and compare logic state against simulated equipment behavior under abnormal conditions.
Within the scope of this article, that includes:
- building lockout and permissive logic in the browser-based ladder editor,
- using Simulation Mode to execute and debug the program without physical hardware,
- manipulating tags through the Variables Panel to emulate hostile or invalid inputs,
- and, where the scenario supports it, using 3D or digital twin views to confirm that equipment behavior follows the defensive logic.
That is a credible use case because these are exactly the tasks that are risky, inconvenient, or expensive to rehearse on live systems. It is also where training value becomes concrete: not “learn cybersecurity” in the abstract, but watch the permissive fail closed when the heartbeat dies and the command still insists.
The boundary is equally important. OLLA Lab does not certify compliance with IEC 62443, replace vendor hardening, or validate a production architecture by itself. It is a risk-contained sandbox for rehearsing logic-level defensive behavior before that behavior is trusted near real equipment.
What are the main design mistakes when adding security logic to PLC programs?
The main design mistakes are usually architectural, not syntactic. Engineers often add a security feature as an isolated rung and forget to integrate it into the actual authority path.
Common mistakes include:
- Counting failed attempts without blocking protected actions. A counter that never gates a permissive is merely decorative.
- Using raw commands directly in output logic. Security checks must sit in the authority chain, not in a side branch no output actually consumes.
- Failing open on heartbeat loss. If supervision disappears and the process continues by default, the heartbeat logic is theater.
- Resetting lockout too easily. Automatic or unsupervised reset defeats the purpose of the lockout.
- Ignoring scan-cycle behavior. Submit pulses, one-shots, and counter increments can misbehave if edge detection is sloppy.
- Treating HMI authentication as sufficient process validation. A valid user can still issue an invalid command for the current process state.
- Not defining the safe state explicitly. "Safe" should mean something observable: de-energized motor, closed valve, sequence halted, alarm latched, restart blocked until supervised recovery.
Security logic fails in familiar ways. It is still logic.
Conclusion
Protecting PLC logic from intrusion under IEC 62443 means moving part of the defense into the control program itself. The core behaviors are straightforward: authenticate where appropriate, count and lock out repeated failures, monitor supervision integrity, reject impossible commands, and force a deterministic safe state when trust conditions collapse.
The practical value of simulation is that it lets engineers test those behaviors before a live process provides the feedback in a more expensive dialect. A Simulation-Ready workflow is not about drawing a cleaner rung. It is about proving that the rung still does the correct thing when the surrounding assumptions stop cooperating.
Related Reading and Next Step
- Zero-Trust OT: Why Implicit Trust Is Now a Liability on the Floor
- The Cybersecurity-First PLC Programmer: IEC 62443 Implementation
- Return to the Ladder Logic Mastery Hub
- Open the Security and Permissives Preset in OLLA Lab
Continue Learning
- Up (Pillar Hub): Explore Pillar guidance
- Across: Related article 1
- Across: Related article 2
- Down (Commercial/CTA): Build your next project in OLLA Lab