
How to Handle PLC Vendor Extensions: UDT vs. USER_DEFINED in IEC 61131-3

IEC 61131-3 standardizes PLC languages, not full cross-vendor runtime behavior. This article explains how UDTs, DUTs, memory layout, and validation practices affect migration and commissioning risk.

Direct answer

IEC 61131-3 standardizes PLC programming languages, not full cross-vendor runtime behavior. Complex data structures such as UDTs, DUTs, and vendor-specific user-defined types remain implementation-dependent, especially in memory layout and block access. OLLA Lab provides a bounded simulation environment for practicing data mapping, tag structuring, and logic validation before vendor-specific deployment.


IEC 61131-3 compliance is not the same thing as code portability. The standard defines language families and core semantics, but it leaves important runtime and memory details implementation-dependent, which is exactly where cross-platform migrations tend to fail.

A practical correction helps here: most failed migrations are not caused by the rung that starts a motor. They are caused by the structure wrapped around it. In an internal Ampergon Vallis benchmark, 68% of simulated cross-platform migration failures were associated with nested user-defined data structure mismatches, padding conflicts, or address-model assumptions rather than core ladder syntax errors [Methodology: n=512 simulated migration tasks involving multi-tag structures and I/O remapping, baseline comparator = syntax-only ladder acceptance, time window = Jan 2025 to Feb 2026]. This supports one narrow claim: data modeling is a primary failure point in migration rehearsal. It does not prove a universal industry rate.

That distinction matters because syntax is easy to admire and hard to deploy. Memory layout is the quieter problem, and it is usually the one waiting at commissioning.

Why is IEC 61131-3 compliance not enough for code portability?

IEC 61131-3 standardizes programming languages, not identical vendor behavior. It defines languages such as Ladder Diagram (LD), Function Block Diagram (FBD), Structured Text (ST), Sequential Function Chart (SFC), and related type concepts, but it permits implementation-dependent behavior in areas that affect execution, storage, and interoperability.

In practice, “implementation-dependent” is not a footnote. It is the place where vendors make different decisions about:

  • data alignment and padding,
  • byte ordering and storage conventions,
  • optimized versus absolute memory access,
  • compiler treatment of structures and nested types,
  • retentive behavior,
  • library-specific function block implementations.

This is why two controllers can both be described as IEC 61131-3 compliant and still disagree on how a complex structure is stored or addressed.
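Byte ordering alone is enough to produce that disagreement. The sketch below is plain Python using the standard-library `struct` module, not PLC code; the value 72.5 and the word-swap convention are illustrative, chosen because "CDAB" word-swapped REALs are a layout some Modbus devices use.

```python
import struct

# A 32-bit REAL as one device stores it: big-endian, "ABCD" word order.
value = 72.5
abcd = struct.pack(">f", value)

# A second device stores the same REAL with its two 16-bit words
# swapped ("CDAB"), a convention seen in some Modbus implementations.
cdab = abcd[2:] + abcd[:2]

# A reader that assumes ABCD decodes the two buffers very differently:
read_abcd = struct.unpack(">f", abcd)[0]   # 72.5, as intended
read_cdab = struct.unpack(">f", cdab)[0]   # a wildly different number
```

Both devices are "storing a REAL"; only one of them is storing the REAL the reader expects.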

A useful engineering definition follows from that: portable logic is not logic that compiles in two places; it is logic whose data assumptions, execution assumptions, and interface assumptions survive both compilers. That is a higher bar than most marketing language implies.

What does the standard actually leave open?

The standard leaves room for vendor-specific implementation in several areas relevant to data structures and interoperability. Depending on edition and vendor documentation, this commonly includes:

  • internal representation of data types,
  • structure packing and alignment,
  • access methods for variables and blocks,
  • tasking and scan behavior details,
  • library and runtime extensions.

The result is straightforward. A type declaration can look standard while the underlying memory behavior is not.

Why does this matter on a live system?

It matters because external interfaces do not negotiate with your assumptions. A Modbus register map, an OPC UA client, an HMI faceplate, or a peer PLC exchange expects a stable interpretation of data. If one side pads a `BOOL` field or reorders a structure for optimized access, the logic may still compile while the process data shifts underneath it.

That is the sort of error that survives a code review and appears during startup.

What are the key differences between UDT and USER_DEFINED types across PLC vendors?

The key difference is not only naming. It is how each vendor binds custom types to memory, access rules, and tooling behavior.

Different ecosystems use different terms for broadly similar ideas:

Vendor terminology breakdown

- Rockwell Automation (Studio 5000):

  • Uses User-Defined Data Types (UDTs).
  • UDTs are central to Logix tag modeling.
  • Memory behavior and member alignment follow vendor-specific compiler rules.
  • Integrators often encounter alignment assumptions when exchanging packed data with non-Rockwell systems.

- Siemens (TIA Portal):

  • Uses PLC data types, commonly still called UDTs in everyday engineering language.
  • “Optimized block access” can change how data is arranged internally.
  • This improves efficiency inside the Siemens environment but can break workflows that depend on fixed absolute offsets.
  • If an external system expects old-style fixed addresses, optimized access is not helpful.

- Codesys-based platforms, including Beckhoff and WAGO in many implementations:

  • Commonly use DUTs (Data Unit Types) declared with `TYPE ... END_TYPE`.
  • The syntax is standardized in style, but runtime packing and target behavior still depend on the platform and compiler.
  • Cross-target portability remains conditional, not automatic.

- Other vendor environments:

  • May use terms such as `STRUCT`, `USER_DEFINED`, custom record types, or platform-specific object models.
  • The naming difference is less important than the resulting storage and access behavior.

What is the operational distinction engineers should care about?

The operational distinction is this: a user-defined type is not just a naming convenience; it is a contract about data shape, member order, and access expectations. If two systems disagree about that contract, the logic around the data becomes unreliable even when the ladder itself is perfectly ordinary.

This is where engineers often confuse language compatibility with deployability. The first is textual. The second is physical.

How does memory padding break standardized logic?

Memory padding breaks standardized logic by shifting the expected location of fields inside a structure. The logic may remain syntactically valid, but any interface that assumes a different byte or word layout can read the wrong value.

Consider this simplified declaration:

```
TYPE Motor_Control_DUT :
STRUCT
    Start_Cmd  : BOOL;  (* may be padded depending on vendor *)
    Speed_Ref  : REAL;  (* 32 bits *)
    Fault_Code : INT;   (* 16 bits *)
END_STRUCT
END_TYPE
```

This declaration appears simple. It is not universally simple in memory.

One compiler may place `Start_Cmd` into a packed bit location and place `Speed_Ref` immediately after the next valid boundary. Another may align the `REAL` on a 32-bit boundary and insert padding after the `BOOL`. A third may optimize the structure inside a block in a way that makes absolute offsets unsafe for external consumers.
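The first two of those compilers can be imitated with `ctypes` structures. This is an illustrative Python sketch, not vendor code: `_pack_ = 1` stands in for a compiler that packs the `BOOL` into one byte, and the default layout stands in for one that aligns the `REAL` on a 32-bit boundary.

```python
import ctypes

class PackedDUT(ctypes.Structure):
    # Stand-in for a target that packs members with no padding.
    _pack_ = 1
    _fields_ = [("Start_Cmd", ctypes.c_bool),    # 1 byte
                ("Speed_Ref", ctypes.c_float),   # 4 bytes
                ("Fault_Code", ctypes.c_int16)]  # 2 bytes

class AlignedDUT(ctypes.Structure):
    # Stand-in for a target that aligns the REAL on a 4-byte boundary,
    # inserting 3 pad bytes after the BOOL and trailing struct padding.
    _fields_ = [("Start_Cmd", ctypes.c_bool),
                ("Speed_Ref", ctypes.c_float),
                ("Fault_Code", ctypes.c_int16)]

print(PackedDUT.Speed_Ref.offset, ctypes.sizeof(PackedDUT))    # 1, 7
print(AlignedDUT.Speed_Ref.offset, ctypes.sizeof(AlignedDUT))  # 4, 12
```

Same declaration, same member order, and the speed reference lives at offset 1 on one "target" and offset 4 on the other.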

A concrete failure mode

A common failure mode appears in register-based communications.

  • A sending controller exposes a structure to a Modbus TCP map.
  • The engineer assumes `Start_Cmd`, `Speed_Ref`, and `Fault_Code` occupy consecutive expected offsets.
  • The receiving controller imports or reconstructs the same conceptual structure using its own compiler rules.
  • The `REAL` lands at a different offset because the first platform padded the `BOOL` field.
  • The receiving side now reads corrupted speed reference data or interprets part of the floating-point value as a fault code.

The ladder can be “correct” and the machine can still behave incorrectly. That is the practical consequence of data misalignment.
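That failure mode takes only a few lines to reproduce. In this hypothetical sketch (Python `struct`; the 1450.0 speed reference and the two-byte misread are invented for illustration), the sender packs the structure tightly while a second receiver assumes the `BOOL` fills a whole 16-bit register:

```python
import struct

# Sender packs the structure with no padding:
# BOOL (1 byte) + REAL (4 bytes) + INT (2 bytes) = 7 bytes on the wire.
wire = struct.pack("<?fh", True, 1450.0, 0)

# Receiver A shares the packed layout and recovers the speed reference.
_, speed_ok, _ = struct.unpack("<?fh", wire)

# Receiver B assumes the BOOL occupies a full 16-bit register, so it
# reads the REAL two bytes into the buffer -- three bytes of the float
# plus one byte of the fault code.
speed_bad = struct.unpack_from("<f", wire, 2)[0]
```

Both receivers parse without error; only one of them sees 1450.0.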

Why nested types make this worse

Nested structures multiply the risk because each child structure can introduce its own alignment behavior. A simple motor object may be manageable. A process skid object containing commands, permissives, alarms, analog values, PID parameters, and status words becomes much more fragile.

The failure pattern is predictable:

  • one incorrect assumption at the parent structure level,
  • one hidden alignment rule in a child structure,
  • one external mapping built on absolute offsets,
  • one long commissioning day.
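The compounding effect is easy to demonstrate with the same `ctypes` stand-in (illustrative only; no vendor uses these exact rules): changing the child structure's packing silently moves every parent member declared after it.

```python
import ctypes

class ChildPacked(ctypes.Structure):
    _pack_ = 1  # child packs tightly: 5 bytes
    _fields_ = [("Flag", ctypes.c_bool), ("Value", ctypes.c_float)]

class ChildAligned(ctypes.Structure):
    # child aligns the float: 1 + 3 pad + 4 = 8 bytes
    _fields_ = [("Flag", ctypes.c_bool), ("Value", ctypes.c_float)]

class ParentPacked(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("Mode", ctypes.c_int16), ("Motor", ChildPacked),
                ("Spare", ctypes.c_int16)]

class ParentAligned(ctypes.Structure):
    _fields_ = [("Mode", ctypes.c_int16), ("Motor", ChildAligned),
                ("Spare", ctypes.c_int16)]

# The child's alignment rule shifts the parent's later members:
print(ParentPacked.Spare.offset, ParentAligned.Spare.offset)   # 7, 12
```

One hidden rule in the child, and every offset after it in the parent is wrong for the other platform.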

What are the practical differences between Rockwell UDTs, Siemens block models, and Codesys DUTs?

The practical differences appear in how engineers define, access, and exchange structured data.

Rockwell Automation

Rockwell UDTs are heavily used for reusable equipment models, faceplates, and AOI-adjacent tag organization. In practice, engineers often design around Logix tag structures that are clean inside the Rockwell ecosystem but require deliberate remapping when exposed to third-party systems.

Practical implications include:

  • strong internal consistency for Logix projects,
  • frequent use in motor, valve, and device object patterns,
  • care required when mapping to external protocols or non-Rockwell consumers,
  • alignment and packing assumptions that should be verified, not guessed.

Siemens

Siemens introduces an important distinction between standard-style access and optimized block access. Optimized access can improve memory use and internal performance, but it reduces address transparency for external systems that expect fixed offsets.

Practical implications include:

  • efficient internal handling of structured data,
  • reduced reliability of old absolute-address assumptions,
  • need to decide explicitly whether external interoperability or internal optimization takes priority,
  • extra caution when integrating HMIs, historians, or peer PLCs expecting stable addresses.

Codesys and related platforms

Codesys-style DUTs offer a familiar and flexible type declaration model. They are powerful for structured engineering, reusable libraries, and machine abstraction. They are not, however, a guarantee that another target will store the same structure identically.

Practical implications include:

  • clear type declaration syntax,
  • portability within bounded platform assumptions,
  • target-specific runtime differences still relevant,
  • need for explicit verification when crossing vendor boundaries.

Why does “lift and shift” PLC migration usually fail?

“Lift and shift” PLC migration usually fails because industrial control software is coupled to hardware behavior, memory models, I/O conventions, scan assumptions, and vendor tooling. The logic is only one layer of the system.

A migration normally requires engineers to reconcile at least five things:

- Type systems: UDT, DUT, struct, and block conventions differ.
- Addressing models: absolute, symbolic, optimized, and protocol-exposed layouts differ.
- Instruction behavior: timers, counters, PID blocks, and library functions are not always semantically identical.
- I/O binding: field devices, scaling, and diagnostic bits are platform-specific.
- Commissioning assumptions: startup sequencing, fault reset behavior, and permissive handling are often encoded in vendor-native patterns.

So the honest version is this: there is no industrial “copy-paste migration” worth trusting on a live process. There is only translation, verification, and risk reduction.

How should engineers define “Simulation-Ready” for cross-platform PLC work?

Simulation-Ready should be defined operationally, not aspirationally. A Simulation-Ready engineer can prove, observe, diagnose, and harden control logic against realistic process behavior before that logic reaches a live process.

For cross-platform data structure work, that means the engineer can:

  • define structured tags and nested members clearly,
  • trace cause-and-effect between ladder state and equipment state,
  • verify expected behavior under normal and abnormal conditions,
  • identify where data assumptions depend on vendor-specific packing or access rules,
  • revise the logic or mapping after an injected fault,
  • document what “correct” means before deployment.

That is different from merely being able to write a rung. Syntax is necessary. It is not the finish line.

This framing aligns with broader engineering literature on simulation-based validation, digital twin use in industrial systems, and pre-deployment verification as a risk-reduction measure in complex automation environments (for example, Fuller et al., 2020; Jones et al., 2020; and IEC 61508 principles on systematic risk reduction).

How can OLLA Lab help engineers practice vendor-agnostic data mapping?

OLLA Lab helps by giving engineers a bounded environment to rehearse structured tag design, simulation, and fault-aware validation before moving into vendor-specific IDE constraints. It is not a universal code translator, and it should not be treated as one.

Its value here is narrower and more credible: it lets engineers practice the engineering habit that migration work actually requires.

What OLLA Lab is doing in this workflow

Within the product’s documented scope, OLLA Lab provides:

  • a web-based ladder logic editor,
  • simulation mode for running and stopping logic,
  • a Variables Panel for monitoring and adjusting tags, I/O, analog values, and related states,
  • scenario-based industrial exercises,
  • digital-twin-style simulation contexts for validating behavior against modeled equipment.

For this article’s use case, the Variables Panel matters most because it allows engineers to define and inspect structured variables in a hardware-agnostic environment before they confront vendor-specific compilation and mapping rules.

What “vendor-agnostic” means here

Vendor-agnostic does not mean vendor-free deployment. It means the practice environment is not forcing the student to learn Rockwell, Siemens, and Codesys memory rules all at once while also learning causality, sequencing, and tag architecture.

That separation is useful because beginners and junior engineers often fail for two reasons at once:

  • they do not yet understand the control behavior,
  • and they are already buried in platform-specific details.

How do you use the OLLA Lab Variables Panel to rehearse UDT-style mapping?

The workflow is to model the control behavior first, then model the data shape, then validate causality under simulation.

### Step 1: Define the raw control logic

Build the ladder logic in the editor using the required instruction set for the scenario:

  • contacts and coils,
  • timers and counters,
  • comparators and math blocks,
  • analog and PID elements where relevant.

At this stage, focus on sequence and causality. A motor start permissive chain should behave correctly before you worry about how a specific vendor will pad a child structure.

### Step 2: Build the structured tags in the Variables Panel

Use the Variables Panel to create a nested tag model that reflects the equipment or process object. For example:

  • `Motor_Status.Running`
  • `Motor_Status.Faulted`
  • `Motor_Command.Start`
  • `Motor_Command.Stop`
  • `Motor_Analog.Speed_Ref`
  • `Motor_Alarm.Overload`

This is where OLLA Lab becomes operationally useful. The engineer can practice naming discipline, grouping logic-related members, and observing how rung state maps to equipment state.
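One way to rehearse that grouping offline is to mirror it in ordinary code. The minimal Python sketch below simply reuses the tag names above for structural reasoning; it is not an OLLA Lab API and carries no vendor memory semantics.

```python
from dataclasses import dataclass, field

@dataclass
class MotorCommand:
    Start: bool = False
    Stop: bool = False

@dataclass
class MotorStatus:
    Running: bool = False
    Faulted: bool = False

@dataclass
class Motor:
    # Grouping mirrors the tag model: commands, status, analogs, alarms.
    Command: MotorCommand = field(default_factory=MotorCommand)
    Status: MotorStatus = field(default_factory=MotorStatus)
    Speed_Ref: float = 0.0
    Overload_Alarm: bool = False

m = Motor()
m.Command.Start = True  # toggle a member and observe the model
```

The exercise is the point: deciding which members belong together, and in what order, before any compiler gets a vote.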

### Step 3: Simulate and observe state changes

Run the simulation and toggle inputs while watching outputs and variables.

Check for:

  • expected transitions,
  • failed permissives,
  • alarm behavior,
  • analog response,
  • sequence timing,
  • mismatch between intended state and actual tag behavior.

A good simulation session answers a plain question: when the process changes, does the logic change for the right reason?
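That question can be made concrete with a toy scan model. The sketch below is plain Python, not OLLA Lab code: it evaluates one seal-in start/stop rung per call, so toggling inputs and checking each transition mirrors the simulation session described above.

```python
def scan(run, start_pb, stop_pb, overload_ok):
    # One scan of a seal-in start/stop rung: the run output latches
    # through its own contact until stop is pressed or the overload
    # permissive drops out.
    return (start_pb or run) and not stop_pb and overload_ok

run = False
run = scan(run, start_pb=True,  stop_pb=False, overload_ok=True)  # starts
run = scan(run, start_pb=False, stop_pb=False, overload_ok=True)  # seals in
run = scan(run, start_pb=False, stop_pb=True,  overload_ok=True)  # stops
```

Each line answers the plain question directly: the output changed, and it changed for a reason you can point at.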

### Step 4: Inject an abnormal condition

Introduce a fault case such as:

  • failed proof feedback,
  • analog high-high trip,
  • permissive loss during run,
  • delayed start confirmation,
  • stale command state.

The purpose is to verify that the structure and logic still make sense when the process stops cooperating.

### Step 5: Revise the logic and document the mapping assumptions

Adjust the ladder, tag grouping, or state handling after the fault appears. Then record which assumptions are portable and which will need vendor-specific treatment later.

That final step is the difference between practice and evidence.

What engineering evidence should a junior engineer build instead of a screenshot gallery?

A junior engineer should build a compact body of engineering evidence that demonstrates reasoning, validation, and revision. Screenshots are supporting artifacts. They are not the argument.

Use this structure:

  1. System description: define the machine or process object, its purpose, major states, and key I/O.
  2. Operational definition of “correct”: state what correct behavior means in observable terms. Example: “Pump starts only when lead enable, low suction trip clear, and remote start command are true; faults latch until reset.”
  3. Ladder logic and simulated equipment state: show the relevant rungs and the corresponding simulated state changes in tags, outputs, or modeled equipment.
  4. The injected fault case: describe the abnormal condition introduced and why it matters operationally.
  5. The revision made: explain what changed in the logic, tag structure, or sequence handling after the fault was observed.
  6. Lessons learned: record the engineering conclusion, especially where data modeling, sequencing, or interface assumptions created risk.
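An operational definition of “correct” can even be written as an executable check. The Python sketch below encodes the pump example plus a simple fault latch; the function and signal names are hypothetical, invented for illustration.

```python
def pump_start_permitted(lead_enable, low_suction_trip, remote_start,
                         fault_latched):
    # "Pump starts only when lead enable, low suction trip clear, and
    # remote start command are true; faults latch until reset."
    return (lead_enable and not low_suction_trip and remote_start
            and not fault_latched)

def update_fault_latch(fault_latched, fault_active, reset):
    # A fault latches while active and clears only when reset is
    # commanded and the fault condition is gone.
    return (fault_latched or fault_active) and not (reset and not fault_active)
```

Writing the definition this way forces the engineer to say, before commissioning, exactly which signal states count as “correct”.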

This format is useful in training because it demonstrates commissioning judgment, not just diagram literacy. Employers do not need more screenshots of green bits. They need evidence that the engineer can think when the process disagrees.

How does this connect to digital twin validation and commissioning risk?

Digital twin validation is useful when it is defined as behavior checking against a realistic machine or process model before deployment. It is not useful when it is treated as decorative 3D scenery attached to untested logic.

In a bounded training environment such as OLLA Lab, digital-twin-style simulation helps engineers compare:

  • ladder state,
  • I/O state,
  • analog behavior,
  • sequence progression,
  • and modeled equipment response.

That comparison matters because commissioning failures are often relational. The rung looks fine in isolation. The process sequence does not.

Research across industrial digital twins and simulation-based engineering has consistently supported the value of virtual validation for reducing late-stage integration risk, improving system understanding, and supporting operator or engineer training when properly scoped (Fuller et al., 2020; Tao et al., 2019). Functional safety guidance also reinforces the broader principle that systematic faults should be found as early as possible through disciplined design and verification rather than discovered on the plant floor (IEC 61508; exida, 2024).

The field translation is simple: if a fault can be found before startup, that is the correct time to find it.

What should engineers do before moving from OLLA Lab to a vendor-specific PLC environment?

Engineers should treat OLLA Lab as a rehearsal environment, then perform explicit vendor-specific translation and verification before deployment.

Use this handoff checklist:

  • confirm the target platform’s type system and naming conventions,
  • verify structure packing and alignment behavior,
  • decide whether external systems require fixed addressing,
  • review optimized versus standard block access settings,
  • map protocol-exposed data explicitly,
  • validate analog scaling and engineering units,
  • compare timer, counter, and PID behavior against target semantics,
  • run fault cases again in the vendor environment,
  • document any assumptions that changed during translation.

This is the disciplined path from simulation to deployment. It is slower than wishful thinking and faster than a bad startup.


Editorial transparency

This blog post was written by a human, with all core structure, content, and original ideas created by the author. However, this post includes text refined with the assistance of ChatGPT and Gemini. AI support was used exclusively for correcting grammar and syntax, and for translating the original English text into Spanish, French, Estonian, Chinese, Russian, Portuguese, German, and Italian. The final content was critically reviewed, edited, and validated by the author, who retains full responsibility for its accuracy.

About the Author: Jose NERI, PhD, Lead Engineer at Ampergon Vallis

Fact-Check: Technical validity confirmed on 2026-03-23 by the Ampergon Vallis Lab QA Team.

© 2026 Ampergon Vallis. All rights reserved.