What this article answers
To align with 2026 collaborative application standards, OEMs must validate the entire robotic application, not just the robot arm. In practice, that means testing PLC safety logic, zone sensing, stopping behavior, and workspace interactions against a digital twin that shows whether the intended safety sequence matches physical machine behavior.
A collaborative robot is not automatically a safe collaborative application. That distinction is central to the 2026 conversation, and it is often where weak safety assumptions begin.
A recent Ampergon Vallis validation run in a palletizing scenario showed that a simulated LiDAR zone breach at 1.6 m/s required an additional 140 ms of deceleration allowance in the control sequence to avoid a virtual collision. [Methodology: 12 repeated runs of one palletizer digital-twin task, compared against the same logic without added deceleration allowance, observed during Q1 2026.] This supports one narrow point: timing margins that look acceptable in static logic review can fail once motion and inertia are modeled. It does not support any broad claim about all robot cells, all payloads, or formal compliance.
The practical issue is simple. Safety logic that looks correct in a ladder editor can still be late in a real workspace.
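To get a feel for why a 140 ms allowance matters, the sketch below does the plain arithmetic: at constant speed, delay converts directly into travel distance. This is simple kinematics using the figures quoted above, not data from the validation run itself.

```python
def travel_during_delay(speed_m_s: float, delay_s: float) -> float:
    """Distance covered at constant speed during a control-response delay."""
    return speed_m_s * delay_s

# 1.6 m/s with a 140 ms response allowance: ~0.224 m of additional travel
extra_travel = travel_during_delay(1.6, 0.140)
print(round(extra_travel, 3))  # 0.224
```

A quarter of a meter of uncommanded travel is the difference between a clean stop and a virtual collision in a tight palletizing cell, which is why the margin only shows up once motion is modeled.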
What is the difference between a safe robot and a safe collaborative application?
A safe robot and a safe collaborative application are not the same thing. Under the ISO 10218 framework and related collaborative guidance, safety is assessed at the application level, not granted by the manipulator alone.
That means the safety case must include the whole working system:
- the robot manipulator,
- the end-effector or tooling,
- the workpiece or payload,
- the workspace layout,
- the sensing architecture,
- and the control logic that governs interaction states.
This matters because a robot arm marketed for collaborative use can become hazardous once it carries a sharp gripper, a welding torch, a heavy carton, or a rigid sheet-metal part. Internal force limiting does not neutralize every application hazard.
The three application elements that change the safety case
The application-level risk picture changes materially when these elements are added:
- Manipulator: Reach, speed, stopping behavior, axis motion, and control interfaces.
- End-effector/tooling: Pinch points, sharp edges, thermal hazards, stored energy, vacuum loss, or gripping failure.
- Workpiece/payload: Mass, geometry, inertia, drop risk, and secondary impact risk.
ISO/TS 15066 is commonly used as guidance for collaborative operation, particularly around contact limits and application assessment, while ISO 10218-1 and ISO 10218-2 define the broader robot and integration framework. The key engineering implication is stable: the integrator must validate the application behavior in context, not merely inherit the robot vendor's marketing language.
What are the four collaborative operation modes defined by ISO standards?
The four collaborative operation modes are Safety-Rated Monitored Stop, Hand Guiding, Speed and Separation Monitoring, and Power and Force Limiting. These are the standard reference modes used when designing collaborative robot applications.
For controls engineers, the important distinction is that these are not just labels. They imply different sensing architectures, different control behaviors, and different validation burdens.
1. Safety-Rated Monitored Stop (SMS)
Safety-Rated Monitored Stop means the robot stops when a human enters the collaborative space, while motion restart is controlled and conditional.
Typical control implications include:
- safety input from scanner, gate, or zone device,
- safe stop command path,
- reset and restart permissives,
- proof that motion remains inhibited while personnel are present.
2. Hand Guiding (HG)
Hand Guiding allows an operator to directly guide robot motion using a dedicated enabling arrangement and constrained operating conditions.
Typical control implications include:
- enabling device validation,
- limited operating mode selection,
- restricted speed or force behavior,
- supervised transition in and out of hand-guiding mode.
3. Speed and Separation Monitoring (SSM)
Speed and Separation Monitoring means robot speed is dynamically controlled so that a minimum protective separation distance is maintained between the robot system and the human.
Typical control implications include:
- area scanner or vision-based zone inputs,
- speed reduction states,
- safe stop states when separation is violated,
- dynamic transitions between normal, reduced, and stopped motion.
4. Power and Force Limiting (PFL)
Power and Force Limiting means the application is designed so that contact, if it occurs, remains within acceptable biomechanical limits under defined conditions.
Typical control implications include:
- validated force or torque limits,
- payload and tooling constraints,
- speed limitations,
- application-specific injury-risk assessment.
PFL is often misunderstood as "the robot is safe to touch." That is too broad to be useful. The real question is whether the application under defined operating conditions remains within acceptable limits.
How do you program Speed and Separation Monitoring logic?
Programming SSM logic requires more than mapping a scanner bit to a stop coil. The logic must account for human approach, robot speed, response time, stopping distance, and the transition rules between warning, reduced-speed, and stop states.
A common protective separation framing is:
S = (v_h × T_r) + (v_r × T_r) + C
Where:
- S = protective separation distance
- v_h = human approach speed
- v_r = robot approach speed
- T_r = total system response time
- C = additional intrusion or measurement compensation factor
The exact implementation method depends on the sensing architecture and the applicable risk assessment, but the engineering principle is stable: if response time is underestimated, the separation distance is not reliable.
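The framing above can be turned into a quick sensitivity check before any logic is written. The sketch below computes S from the terms as defined in this article; the numeric values are illustrative assumptions, and real values must come from the risk assessment and measured stopping performance.

```python
def protective_separation(v_h: float, v_r: float, t_r: float, c: float) -> float:
    """S = (v_h * T_r) + (v_r * T_r) + C, the simplified framing used above.

    v_h: human approach speed (m/s), v_r: robot approach speed (m/s),
    t_r: total system response time (s), c: intrusion/measurement compensation (m).
    Illustrative values only; real values come from the risk assessment.
    """
    return (v_h * t_r) + (v_r * t_r) + c

# Assumed example: 1.6 m/s human approach, 1.0 m/s robot, 0.35 s response, 0.1 m compensation
s_realistic = protective_separation(1.6, 1.0, 0.35, 0.10)
print(round(s_realistic, 3))  # 1.01

# The same cell with an optimistic 0.25 s response time estimate:
s_optimistic = protective_separation(1.6, 1.0, 0.25, 0.10)
print(round(s_optimistic, 3))  # 0.75  -- 0.26 m of margin quietly disappears
```

This is the stable principle in numeric form: shaving 100 ms off the assumed response time shrinks the required separation by a quarter meter, and nothing in a ladder editor will flag that.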
What should ladder logic do in an SSM application?
At minimum, the ladder logic should manage these state transitions:
- Normal operation: Full-speed motion permitted when no protective zone is breached.
- Warning zone entered: Command reduced speed and verify the robot acknowledges the reduced-speed state.
- Protective zone entered: Trigger the required safe stop function and inhibit hazardous motion.
- Zone clear: Hold restart conditions until reset, acknowledgement, or procedural permissives are satisfied.
- Fault state: Default to a safe state if scanner health, communications, or safety input validity is lost.
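These transitions can be rehearsed as a minimal state machine before they become ladder rungs. The sketch below is a simulation aid with hypothetical state and signal names, not a safety-rated implementation; note that the fault and protective checks are evaluated first, so the logic degrades toward the safe state.

```python
NORMAL, REDUCED, STOPPED, FAULT = "NORMAL", "REDUCED", "STOPPED", "FAULT"

def next_state(state, scanner_ok, warning_breach, protective_breach, reset_ok):
    """Return the next SSM state for one scan, safe-state-first ordering."""
    if not scanner_ok:
        return FAULT                  # scanner health lost: default to safe state
    if protective_breach:
        return STOPPED                # protective zone: safe stop, inhibit motion
    if state in (STOPPED, FAULT):
        # Hold the stop until reset and permissives are satisfied and zones are clear.
        return NORMAL if reset_ok and not warning_breach else state
    if warning_breach:
        return REDUCED                # warning zone: command reduced speed
    return NORMAL

# One breach sequence: warning -> protective -> clear without reset -> reset granted
trace = []
state = NORMAL
for scanner_ok, warn, prot, reset in [
    (True, True,  False, False),   # warning zone entered
    (True, True,  True,  False),   # protective zone entered
    (True, False, False, False),   # zones clear, no reset yet: stop must hold
    (True, False, False, True),    # reset and permissives satisfied
]:
    state = next_state(state, scanner_ok, warn, prot, reset)
    trace.append(state)
print(trace)  # ['REDUCED', 'STOPPED', 'STOPPED', 'NORMAL']
```

The third step is the one static review most often gets wrong: the stop must latch until the reset permissive is true, not clear automatically when the zone empties.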
Example ladder logic pattern for a safety zone breach
|---[ LiDAR_Healthy ]---[ Safety_Zone_Breach ]-----------------(TON Debounce_150ms)---|
|---[ Debounce_150ms.DN ]--------------------------------------(CMD_Safe_Stop_Cat1)---|
|---[ Debounce_150ms.DN ]--------------------------------------(INH_Motion_Enable)----|
|---[/ LiDAR_Healthy ]-----------------------------------------(CMD_Safe_Stop_Cat0)---|
|---[ Warning_Zone_Breach ]---[/ Safety_Zone_Breach ]---------(CMD_Reduced_Speed)----|
This pattern is intentionally simple. In a real design, the stop category, diagnostic coverage, reset behavior, and safety architecture must align with the risk assessment and the safety-rated subsystem design.
The debounce timer deserves a brief comment. It is there to reduce nuisance trips from noisy zone transitions, not to delay a dangerous signal path without justification. Safety filtering has to be justified.
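The latency that filter adds can be made explicit in simulation rather than argued about. Below is a minimal TON-style on-delay model (a sketch, not any vendor's timer implementation) with the 150 ms preset from the rung above and an assumed 10 ms scan time.

```python
class OnDelay:
    """Minimal TON-style on-delay timer, evaluated once per scan."""
    def __init__(self, preset_ms: int):
        self.preset_ms = preset_ms
        self.elapsed_ms = 0
        self.done = False

    def update(self, input_on: bool, scan_ms: int) -> bool:
        if input_on:
            self.elapsed_ms = min(self.elapsed_ms + scan_ms, self.preset_ms)
        else:
            self.elapsed_ms = 0          # any input dropout restarts the delay
        self.done = self.elapsed_ms >= self.preset_ms
        return self.done

# A sustained breach is only confirmed after the full preset elapses,
# so the stop command trails the raw zone signal by that much.
ton = OnDelay(preset_ms=150)
calls, done = 0, False
while not done:
    done = ton.update(True, scan_ms=10)
    calls += 1
print(calls * 10)  # 150 -- milliseconds of added latency on the stop path
```

That 150 ms is exactly the kind of number that must be fed back into the response-time term of the separation calculation; if the debounce is not in T_r, the separation distance is fiction.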
How should engineers handle muting logic?
Muting logic must distinguish expected material movement from human intrusion without weakening the protective function. That usually means:
- defining the specific conveyor or infeed condition that permits muting,
- limiting muting to a bounded time and direction,
- proving that human entry still produces the required protective response,
- alarming or faulting on abnormal muting persistence.
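Those constraints can be rehearsed with a small decision function before the real muting sequence is wired. The function names, the 5-second bound, and the clean human/material distinction below are illustrative assumptions; in a real cell, telling a carton from a person is the hard sensing problem, and this sketch only models the logic around it.

```python
def muting_allows_bypass(material_present: bool, muting_elapsed_s: float,
                         max_muting_s: float = 5.0) -> bool:
    """Muting is valid only while the expected material condition holds
    and the bounded muting window has not been exceeded."""
    return material_present and muting_elapsed_s <= max_muting_s

def protective_trip(zone_breach: bool, human_detected: bool,
                    material_present: bool, muting_elapsed_s: float) -> bool:
    """Human entry must trip the protective function even during valid muting."""
    if human_detected:
        return True                      # muting never masks confirmed human entry
    if not zone_breach:
        return False
    return not muting_allows_bypass(material_present, muting_elapsed_s)

# Expected carton in the window: no trip. Overlong muting: trip. Human entry: trip.
print(protective_trip(True, False, True, 2.0))   # False -- valid muting window
print(protective_trip(True, False, True, 9.0))   # True  -- muting persisted too long
print(protective_trip(True, True,  True, 2.0))   # True  -- human entry during muting
```

The second case is the one worth simulating repeatedly: abnormal muting persistence should fault, not silently extend the bypass.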
Why are digital twins required for 2026 safety validation?
Digital twins are required in practice because static logic review cannot prove motion safety behavior under realistic fault conditions. For collaborative applications, the relevant question is not only "what does the PLC intend?" but "what still happens physically before the machine reaches a safe state?"
In this article, digital twin validation means: binding PLC ladder logic to a kinematics-enabled 3D model to observe the delta between the intended safety sequence and the physical deceleration behavior during a fault state.
That is an operational definition.
What digital twin validation can show that static review often misses
A properly configured simulation can expose:
- deceleration lag after a stop command,
- payload-dependent stopping differences,
- zone breach timing errors,
- scanner dropout behavior,
- race conditions between speed reduction and stop commands,
- restart permissive errors,
- mismatch between ladder state and physical equipment state.
This is where OLLA Lab becomes operationally useful.
OLLA Lab is best understood here as a bounded validation and rehearsal environment. Engineers can build ladder logic in the browser, run it in simulation mode, inspect I/O and variables, and observe the effect on 3D or WebXR equipment models representing realistic industrial scenarios. In that workflow, the product is not a compliance generator and not a substitute for formal safety assessment. It is a place to induce abnormal conditions safely and repeatedly before physical commissioning becomes more expensive.
Why physical-only testing is a poor first pass
Physical testing of high-speed zone breaches is limited by obvious constraints:
- it exposes personnel and equipment to avoidable risk,
- it is difficult to repeat with identical timing,
- it can degrade hardware,
- it encourages teams to test only "reasonable" cases,
- and it often happens too late, after mechanical and schedule commitments are already fixed.
What "Simulation-Ready" means in this context
Simulation-Ready does not mean being familiar with PLC syntax or comfortable in a 3D viewer. It means an engineer can prove, observe, diagnose, and harden control logic against realistic process behavior before it reaches a live process.
Observable behaviors of a Simulation-Ready engineer include:
- defining what "correct" means before testing,
- tracing I/O changes through ladder state and machine response,
- injecting faults deliberately,
- comparing commanded state to simulated equipment state,
- revising logic after abnormal behavior,
- and documenting why the revision improves the control outcome.
How can OEMs use OLLA Lab to validate collaborative application logic safely?
OEMs can use OLLA Lab as a risk-contained sandbox for rehearsing high-risk logic behaviors that are difficult, expensive, or unsafe to test first on physical hardware.
Within the product's documented scope, that includes:
- building ladder logic in a web-based editor,
- running and stopping logic in simulation mode,
- toggling inputs and observing outputs,
- monitoring variables, analog values, and PID-related states,
- validating logic against 3D or WebXR machine models,
- and working through scenario-based sequences, hazards, interlocks, and commissioning notes.
For collaborative applications, that supports a practical validation workflow such as:
- Build the safety-related state logic for warning, reduced-speed, stop, reset, and fault conditions.
- Bind logic behavior to a machine scenario that includes motion and workspace interaction.
- Inject scanner breaches, communications loss, payload changes, or restart edge cases.
- Observe whether the simulated machine state matches the intended safety sequence.
- Revise timing, permissives, or fault handling.
- Preserve the evidence trail.
The value is not that simulation replaces site testing. The value is that it removes avoidable ignorance before site testing begins.
How can OEMs build a compliance decision package using simulation?
Simulation should contribute to a compliance decision package as engineering evidence, not as a decorative appendix. Auditors and safety reviewers are persuaded by traceable reasoning, bounded test evidence, and revision history, not by a folder full of screenshots.
Use this compact evidence structure:
1) System Description
Document:
- cell purpose,
- robot task,
- tooling and payload,
- sensing devices,
- safety functions,
- operating modes,
- and the intended human interaction boundary.
2) Operational definition of "correct"
Define observable pass criteria such as:
- reduced-speed command occurs within the warning-zone condition,
- hazardous motion is inhibited on protective-zone breach,
- restart requires reset and all permissives true,
- scanner health loss forces the system to a safe state,
- simulated stopping behavior remains inside the protected envelope.
If "correct" is not defined in observable terms, the test cannot produce pass/fail evidence worth preserving.
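One way to make "correct" observable is to express each pass criterion as an assertion over the event log a simulation run produces. The log structure, event names, and the 100 ms bound below are hypothetical, not an OLLA Lab API; the point is the shape of the evidence, not the specific names.

```python
# Simulated event log from one digital-twin run: (time_ms, event) pairs.
log = [
    (0,   "warning_zone_breach"),
    (40,  "reduced_speed_commanded"),
    (300, "protective_zone_breach"),
    (330, "safe_stop_commanded"),
    (335, "motion_inhibited"),
]

def time_of(event: str) -> int:
    """First timestamp at which the named event occurs (raises if absent)."""
    return next(t for t, e in log if e == event)

# Pass criteria as observable assertions rather than prose:
assert time_of("reduced_speed_commanded") - time_of("warning_zone_breach") <= 100, \
    "reduced speed must follow the warning-zone condition promptly"
assert any(e == "safe_stop_commanded" for _, e in log), \
    "protective-zone breach must produce a stop command"
assert time_of("motion_inhibited") >= time_of("safe_stop_commanded"), \
    "motion inhibit follows the stop command"
print("all pass criteria satisfied")
```

A failed assertion with a timestamped log is exactly the kind of traceable artifact an auditor can follow; a green dashboard screenshot is not.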
3) Ladder logic and simulated equipment state
Preserve:
- the ladder version tested,
- the variable and I/O state at each transition,
- the corresponding machine motion or stop behavior in the digital twin,
- and any relevant analog or timing values.
This is the core comparison: ladder state versus equipment state.
4) The injected fault case
State exactly what was injected, for example:
- warning-zone breach during full-speed motion,
- protective-zone breach during maximum payload travel,
- scanner communications loss,
- false-clear transition,
- restart request with incomplete permissives,
- or muting active during unexpected human entry.
5) The revision made
Document the actual engineering change:
- debounce adjustment,
- stop-category change,
- revised speed-reduction threshold,
- added health-check permissive,
- reset-sequence correction,
- or altered interlock structure.
6) Lessons learned
Capture what the test revealed, such as:
- response-time assumptions were optimistic,
- payload inertia changed the safe timing margin,
- scanner health needed explicit fault handling,
- or state transitions were logically valid but physically late.
That body of evidence is generally more credible than a polished dashboard with no test logic behind it.
What standards and literature matter when validating collaborative applications?
The standards baseline should be explicit. Collaborative applications sit at the intersection of robot safety, functional safety, and application-specific risk assessment.
Commonly relevant references include:
- ISO 10218-1 / ISO 10218-2 for industrial robot and integration safety requirements.
- ISO/TS 15066 for collaborative robot application guidance.
- IEC 61508 for the broader functional safety framework of electrical, electronic, and programmable systems.
- Technical guidance from organizations such as exida and recognized machine-safety practitioners for implementation interpretation.
- Peer-reviewed literature on digital twins, cyber-physical validation, and industrial simulation from sources such as IFAC-PapersOnLine, Sensors, and related manufacturing systems journals.
A caution is worth stating plainly: no simulator, including OLLA Lab, grants compliance by association. Compliance depends on the complete application design, risk assessment, implemented safety architecture, validation record, and final installed conditions.
What should OEM teams do next?
OEM teams should stop asking whether the robot is collaborative and start asking whether the application behavior is demonstrably safe under faulted conditions.
The practical sequence is:
- define the collaborative mode,
- identify the protective functions,
- model the stopping and separation behavior,
- validate ladder logic against realistic machine motion,
- inject abnormal states before site commissioning,
- and preserve a traceable evidence package.
That is the difference between a plausible design and a defensible one.
Keep exploring
Related Reading
How to Validate ISO 10218-1:2025 Robot Safety Interlocks in Ladder Logic →
How to Program AMR Dynamic Safety Zones in a PLC →
How to Program an Automated Mixer State Machine in Ladder Logic →
Explore the Industrial PLC Programming hub →
Run this workflow in OLLA Lab ↗