Article summary
Flow totalizer errors in PLCs usually come from two different math failures: integer truncation and single-precision floating-point precision loss. INT tags discard fractional flow, while 32-bit REAL tags can eventually stop registering small increments against a large accumulated total. Reliable totalization requires data-type discipline, rollover design, and simulation-based validation.
A flow totalizer can be wrong even when the transmitter, pump, and pipework are all behaving correctly. The failure is often inside the PLC’s arithmetic model, not the process. That distinction matters because bad math is quieter than bad hardware.
During a simulated 24-hour pump run in OLLA Lab, testing a 16-bit INT totalizer against a repeated 0.4-gallon increment, the accumulator recorded 0 gallons while the simulated process moved 576 gallons. Methodology: sample size = 1 controlled simulation task using repeated fixed increments; baseline comparator = expected arithmetic accumulation of 0.4 gallons over 1,440 minutes; time window = 24 simulated hours. This supports one narrow point: integer truncation can produce complete loss of fractional flow in a deterministic test case. It does not establish a universal field failure rate.
This is where “syntax versus deployability” becomes real. A rung can look correct, compile cleanly, and still mislead operations for weeks.
What Causes Truncation Errors in 16-Bit Integer Math?
Truncation errors occur when a PLC stores or processes fractional flow using an integer data type that cannot represent decimals. If the incoming increment is 0.8 and the destination is an INT, the fractional part is discarded before it ever becomes inventory.
In IEC 61131-3 environments, this behavior is normal data-type behavior. The mistake is assuming the process will forgive it.
The limits of 16-bit signed integers
A signed 16-bit integer (`INT`) has a finite range:
- Minimum: `-32,768`
- Maximum: `32,767`
If a totalizer accumulates pulse counts or scaled volume directly into an `INT`, two failure modes appear quickly:
- Overflow: once the value exceeds `32,767`, it rolls into the negative range or faults, depending on platform behavior and instruction handling.
- Fractional deletion: any value below 1.0 is truncated when cast or written into an integer destination.
For a pulse-per-unit application, overflow can happen surprisingly fast. For an analog-derived incremental totalizer, truncation can happen on every scan. One is dramatic; the other is often harder to notice.
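The wraparound behavior can be sketched outside a PLC. This is a minimal Python illustration, not platform code; `wrap_int16` is a hypothetical helper that emulates a signed 16-bit register with two's-complement wraparound:

```python
def wrap_int16(x):
    """Emulate a signed 16-bit register: keep the low 16 bits,
    then reinterpret them as a two's-complement value."""
    x &= 0xFFFF
    return x - 0x10000 if x >= 0x8000 else x

# One more pulse past the INT ceiling wraps into the negative range.
print(wrap_int16(32_767 + 1))   # -32768
```

Exact rollover behavior varies by platform; some controllers fault instead of wrapping, which is why the overflow strategy has to be a deliberate design decision rather than an accident.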
Why integer totalizers silently delete real flow
Integer math does not “round a little.” It removes the remainder. If your logic computes:
- `Flow_Increment = 0.8 gallons per scan`
- `Total_INT = Total_INT + Flow_Increment`
then the effective addition becomes:
- `Total_INT = Total_INT + 0`
The process moved fluid. The PLC recorded nothing.
This is a common design error when engineers scale a 4–20 mA flow signal into engineering units, divide by a time base, and then write the result into an integer accumulator. The rung may be syntactically valid, but the totalizer is already compromised.
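The deletion is easy to reproduce in plain Python (variable names here are illustrative, not from any particular platform). Casting the fractional increment to an integer on every scan discards all of it:

```python
flow_increment = 0.8   # gallons per scan (fractional)
total_int = 0          # emulates an INT accumulator

for _ in range(1_000):                 # 1,000 scans of real flow
    total_int += int(flow_increment)   # the cast truncates toward zero

print(total_int)   # 0: roughly 800 gallons moved, none recorded
```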
Why scan rate makes the problem worse
Fast scan cycles increase the chance that each incremental volume is small. That means more additions fall below 1.0 engineering unit and are lost if the destination is integer-based.
A high-resolution totalizer therefore requires more than an ADD block. It requires alignment between:
- signal scaling,
- scan interval,
- engineering units,
- accumulator data type.
Why Do 32-Bit REAL Totalizers Stop Counting Over Time?
A 32-bit REAL solves the fraction problem, but it introduces a different failure: precision loss at large accumulated values. Once the total becomes large enough, small incoming increments no longer change the stored number.
This is an IEEE 754 behavior, not necessarily a software defect. It is how single-precision floating-point works.
The floating-point precision limit
Most PLC `REAL` types are IEEE 754 single-precision floating-point values. In practical engineering terms, they provide about 7 significant decimal digits of precision.
That means the size of the smallest representable change depends on the magnitude of the number already stored.
Examples:
- Near `10.0`, adding `0.01` is usually representable.
- Near `1,000,000.0`, adding `0.01` may be too small to change the stored value.
- Near larger totals, even modest increments can be swallowed.
The totalizer does not fail because the process stopped. It fails because the accumulator’s numeric resolution has become coarser than the increment being added.
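This can be demonstrated without PLC hardware by round-tripping values through 32-bit storage with Python's standard `struct` module; the helper name `to_f32` is an illustrative assumption, but the rounding behavior it exposes is standard IEEE 754 single precision:

```python
import struct

def to_f32(x):
    """Round-trip a Python float through 32-bit storage to
    emulate a single-precision PLC REAL tag."""
    return struct.unpack('f', struct.pack('f', x))[0]

small = to_f32(10.0)
large = to_f32(1_000_000.0)

print(to_f32(small + to_f32(0.01)) != small)   # True: increment registers
print(to_f32(large + to_f32(0.01)) == large)   # True: increment is swallowed
```

Near 1,000,000 the spacing between adjacent float32 values is 0.0625, so an added 0.01 is below half a step and rounds straight back to the stored total.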
What the “swallowing” effect looks like
The classic symptom is simple:
- the flow transmitter indicates active flow,
- the pump is running,
- the process is physically moving product,
- but the SCADA totalizer flatlines.
At that point, operators often suspect communications, historian lag, or instrumentation drift. Sometimes the problem is much less glamorous: the accumulator has run out of useful granularity.
A `REAL` can represent large numbers or small increments well enough for many tasks. It cannot do both indefinitely in a growing totalizer without design controls.
Why this matters in batching, utilities, and custody-adjacent reporting
Not every totalizer is financially critical, but many are operationally consequential. Errors in accumulated flow can distort:
- batch yield calculations,
- chemical dosing records,
- water balance reporting,
- CIP usage estimates,
- tank inventory reconciliation,
- maintenance decisions tied to throughput.
This article does not make a custody-transfer compliance claim. It makes a narrower engineering claim: if the accumulator architecture is weak, the reported volume can diverge materially from physical reality.
Which PLC Data Type Should You Use for a Flow Totalizer?
The correct answer depends on what you are accumulating: pulses, scaled engineering units, or fractional time-step increments. There is no single universal tag choice, but there are defensible patterns.
Use DINT for whole-count accumulation where possible
If the source is a pulse stream and each pulse represents a fixed quantity, a `DINT` is usually safer than an `INT`.
Why:
- A 32-bit signed `DINT` ranges from `-2,147,483,648` to `2,147,483,647`
- It dramatically delays overflow relative to `INT`
- It preserves exact whole-number counts
For pulse totalization, counting integers as integers is usually the cleanest design.
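A back-of-envelope calculation makes the headroom difference concrete. The 10 pulses-per-second rate below is an assumed example, not a benchmark:

```python
INT_MAX = 32_767
DINT_MAX = 2_147_483_647
pulses_per_second = 10   # hypothetical flowmeter pulse rate

int_horizon_h = INT_MAX / pulses_per_second / 3600
dint_horizon_y = DINT_MAX / pulses_per_second / (3600 * 24 * 365)

print(f"INT overflows in about {int_horizon_h:.1f} hours")
print(f"DINT overflows in about {dint_horizon_y:.1f} years")
```

At that rate an `INT` accumulator overflows in under an hour, while a `DINT` lasts several years, which is usually long enough for a planned rollover or reset strategy to take over.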
Use REAL carefully for fractional working accumulation
If the source increment is fractional, a `REAL` can be useful as a working accumulator, not always as the only lifetime totalizer.
Good use cases:
- accumulating short-window fractional flow,
- holding a subtotal before rollover,
- supporting operator-visible daily or batch totals with bounded reset intervals.
Risky use case:
- letting a single 32-bit REAL grow indefinitely while adding very small increments.
That is where precision erosion becomes a design problem rather than a theoretical one.
Use LREAL if your platform supports it and the application justifies it
A 64-bit `LREAL` offers far greater precision and range than a 32-bit `REAL`. On platforms that support it reliably across controller, HMI, historian, and interface layers, it is often a cleaner solution for long-lived fractional totalization.
But “supports it” must mean end-to-end support:
- controller instruction behavior,
- tag transport,
- SCADA/HMI compatibility,
- historian storage type,
- reporting layer interpretation.
A mathematically sound controller tag is not enough if the rest of the stack quietly downcasts it.
How Do You Program a Cascaded Rollover Totalizer?
A cascaded rollover totalizer separates fractional accumulation from long-term storage. This pattern is often more robust than keeping one ever-growing floating-point total.
The design principle is simple:
- accumulate small increments in a fractional-capable register,
- transfer larger chunks into a long-range integer total,
- retain only the remainder in the fractional register.
This reduces the chance that tiny additions disappear against a very large floating-point number.
Example logic pattern
Step 1: Accumulate raw flow increment into a REAL working total.
`ADD Flow_Increment, Working_Total_Real, Working_Total_Real`
Step 2: Check whether the working total has reached a transfer threshold.
`CMP Working_Total_Real >= 100.0`
Step 3: Move the threshold amount into a long-range integer master total.
`ADD 100, Master_Total_DINT, Master_Total_DINT`
`SUB Working_Total_Real, 100.0, Working_Total_Real`
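The same pattern can be desk-checked in ordinary Python, emulating the 32-bit `REAL` with a `struct` round-trip. Class, tag, and threshold names here are illustrative assumptions, not platform instructions:

```python
import struct

def f32(x):
    # Round-trip through 32-bit storage to emulate a PLC REAL tag.
    return struct.unpack('f', struct.pack('f', x))[0]

class CascadedTotalizer:
    THRESHOLD = 100.0   # transfer chunk size in engineering units

    def __init__(self):
        self.working_real = 0.0   # fractional working accumulator (REAL)
        self.master_dint = 0      # long-range whole-unit total (DINT)

    def add(self, increment):
        self.working_real = f32(self.working_real + f32(increment))
        # Transfer whole chunks into the integer master total,
        # keeping only the remainder in the REAL register.
        while self.working_real >= self.THRESHOLD:
            self.master_dint += int(self.THRESHOLD)
            self.working_real = f32(self.working_real - self.THRESHOLD)

    def total(self):
        return self.master_dint + self.working_real

t = CascadedTotalizer()
for _ in range(1_440):        # 24 simulated hours of 0.4 gal/min
    t.add(0.4)
print(round(t.total(), 2))    # close to the expected 576.0
```

Because the `REAL` subtotal is repeatedly drained back below the threshold, its magnitude stays small and every 0.4-unit increment remains well above the register's resolution.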
Why this pattern works
The engineering benefit is numeric stability.
A cascaded design gives you:
- fraction retention in the working register,
- long-range storage in the integer master total,
- reduced floating-point precision loss because the REAL subtotal stays relatively small,
- clear auditability of how the total is constructed.
You can also extend the pattern with:
- batch totals,
- daily reset registers,
- nonvolatile retained totals,
- alarm checks for rollover anomalies,
- sequence interlocks that prevent updates during invalid instrument states.
What “correct” means for a totalizer design
A totalizer is not “correct” because the rung compiles or the HMI number changes. It is correct when the logic satisfies an operational definition such as:
- accumulated volume matches expected arithmetic within the defined tolerance,
- overflow behavior is prevented or explicitly handled,
- invalid input states do not create false accumulation,
- reset behavior is controlled and auditable,
- long-run precision remains fit for the reporting purpose.
That is the standard that matters in commissioning.
How Does OLLA Lab Reveal Data Type Failures Before Commissioning?
OLLA Lab is useful here as a bounded validation environment, not as an oracle. Its value is that engineers can observe scan-by-scan behavior, manipulate inputs safely, and compare ladder state against simulated process behavior before a live system inherits the mistake.
In practical terms, that means you can test whether the totalizer math behaves correctly under realistic operating conditions rather than trusting a visually tidy rung.
What OLLA Lab makes observable
Using the Ladder Logic Editor, Simulation Mode, and Variables Panel, a user can:
- build a totalizer using `INT`, `DINT`, `REAL`, or mixed-type logic,
- inject fixed or varying flow increments,
- monitor accumulator values in real time,
- compare input behavior against output math,
- accelerate simulation to expose long-run precision problems faster.
That is operationally useful because many of these failures are slow in the field. In simulation, they become inspectable.
Operational definition of “Simulation-Ready”
In this context, Simulation-Ready means an engineer can:
- prove the intended control behavior,
- observe the effect of each input and state transition,
- diagnose numeric and sequencing faults,
- harden the logic against realistic process behavior,
- document why the revised logic is more reliable before it reaches a live process.
It does not mean site competence, certification, or automatic readiness for unsupervised commissioning. Simulation is rehearsal, not legal absolution.
A practical OLLA Lab validation workflow
A useful validation sequence in OLLA Lab would be:
- Create a simulated flow source with known incremental behavior.
- Build one totalizer using `INT` and another using `REAL`.
- Run both under identical increments.
- Observe truncation in the integer path.
- Increase the REAL accumulator until small increments stop changing the total.
- Replace the design with a cascaded rollover or higher-precision strategy.
- Re-run the scenario and compare the results.
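A desk-check version of that comparison is runnable in plain Python before any simulator session; the `f32` helper emulates a 32-bit `REAL` tag and all names are illustrative:

```python
import struct

def f32(x):
    # Round-trip through 32-bit storage to emulate a PLC REAL tag.
    return struct.unpack('f', struct.pack('f', x))[0]

def run_totalizers(steps, increment, real_start=0.0):
    """Run an INT-style and a REAL-style accumulator under
    identical increments and return both results."""
    total_int = 0
    total_real = f32(real_start)
    for _ in range(steps):
        total_int += int(increment)                    # truncation path
        total_real = f32(total_real + f32(increment))  # precision path
    return total_int, total_real

# Fresh accumulators: INT truncates everything, REAL tracks well.
ti, tr = run_totalizers(1_440, 0.4)
print(ti, round(tr, 2))   # 0 vs roughly 576.0

# Identical increments against a large pre-existing REAL total.
_, tr_big = run_totalizers(1_440, 0.4, real_start=50_000_000.0)
print(tr_big == f32(50_000_000.0))   # True: the total never moved
```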
This is where OLLA Lab becomes operationally useful. It gives visibility into a class of failures that often survive desk review and only become obvious after inventory reconciliation becomes difficult.
How Should Engineers Document Totalizer Validation as Real Engineering Evidence?
A screenshot of a rung is not engineering evidence. It is only illustrative unless tied to behavior, fault injection, and revision history.
If you want to demonstrate serious control work, use a compact evidence package with six parts:
- System description: the process context, signal source, units, scan assumptions, and totalization objective.
- Operational definition of "correct": the expected arithmetic behavior, tolerance, reset rules, and invalid-state handling.
- Ladder logic and simulated equipment state: the logic shown together with the corresponding simulated process behavior, not in isolation.
- The injected fault case: the exact failure introduced, such as integer truncation, overflow, floating-point swallowing, bad scaling, or a reset race condition.
- The revision made: the design change, such as `DINT` migration, `LREAL` adoption, cascaded rollover, threshold transfer logic, or gated accumulation.
- Lessons learned: what the test proved, which assumptions failed, and what should become a design standard.
That structure is closer to commissioning evidence than to a simple portfolio snapshot.
What Standards and Literature Support This Design Approach?
The underlying data-type behavior is grounded in standard industrial programming and numerical computing principles, not in platform folklore.
Relevant anchors include:
- IEC 61131-3 for PLC programming languages and data type conventions used across industrial control systems.
- IEEE 754 for floating-point arithmetic behavior, including finite precision and representational limits.
- IEC 61508 for the broader principle that systematic design errors in programmable systems should be identified and controlled through disciplined engineering processes.
- Simulation and digital-twin literature in industrial automation, which generally supports the use of modeled environments to validate control behavior before deployment, especially where live testing is costly or risky.
This article does not claim that a simulator alone establishes compliance, safety integrity, or field acceptance. It makes a narrower claim: simulation improves observability of deterministic logic failures that are otherwise expensive to detect late.
Conclusion
Flow totalizer errors are often caused by poor data-type choices. `INT` tags delete fractions, `REAL` tags can eventually lose small increments against large totals, and both failures can remain invisible long enough to damage reporting, inventory confidence, and operator trust.
The engineering fix is straightforward in principle: use the right numeric architecture for the signal, define what “correct” means before commissioning, and validate the behavior under load. That is the difference between a ladder program that runs and a control strategy that remains reliable in production.
Keep exploring
- Advanced Process Control and PID Simulation Hub →
- Scaling Math From Raw Bits to Engineering Units →
- Software Filtering: First-Order Lag in Ladder Logic →
- Practice totalizer troubleshooting in OLLA Lab ↗