AI Industrial Automation


How to Scale Analog Inputs to Engineering Units in PLCs

Learn how PLC analog scaling converts raw input counts into engineering units using linear math, how resolution and data types affect results, and how to validate scaling safely in OLLA Lab.

Direct answer

To scale an analog input in a PLC, engineers convert raw digital counts from an input card into physical engineering units using a linear equation derived from y = mx + b. Resolution, data type selection, and validation method determine whether that value is merely plausible or actually trustworthy.

Analog scaling is not a cleanup step after wiring. It is the mathematical bridge between a sensor’s electrical signal and the number your logic, alarms, trends, and PID loops will use. If that bridge is wrong, the rest of the control strategy may also be wrong.

Ampergon Vallis metric: In an internal OLLA Lab validation using a 0–100 PSI transmitter profile, moving from a 12-bit input model to a 16-bit input model reduced the minimum measurable step from 0.0244 PSI to 0.0015 PSI, a 93.8% reduction in quantization interval. Methodology: 1 simulated pressure-scaling task, 12-bit profile compared against 16-bit profile, measured on 3/24/2026. This supports a narrow point about resolution granularity in a defined scaling case. It does not by itself prove better loop performance in every plant, because loop quality also depends on transmitter accuracy, filtering, scan time, tuning, and process dynamics.

A common misconception is that if the rung compiles, the scaling is fine. It is not. Syntax is not deployability.

What is the standard PLC scaling formula (y = mx + b)?

The standard PLC scaling formula is a linear mapping from a raw digital input range to an engineering-unit range. In plain terms, it answers one question: given this integer from the input card, what physical value does it represent?

The expanded industrial scaling formula

Scaled Value = [((Raw Input - Raw Min) × (EU Max - EU Min)) / (Raw Max - Raw Min)] + EU Min

This is the practical PLC form of the point-slope relationship derived from y = mx + b.

What each term means

  • Raw Input: the current integer value reported by the analog input card
  • Raw Min: the integer corresponding to the low end of the signal range
  • Raw Max: the integer corresponding to the high end of the signal range
  • EU Min: the low engineering-unit value, such as 0 PSI
  • EU Max: the high engineering-unit value, such as 100 PSI

Why PLCs use this formula

PLCs do not read pressure, level, or temperature directly. They read a converted integer produced by the analog input hardware.

For example:

  • A pressure transmitter may output 4–20 mA
  • The PLC analog card converts that current into a digital count
  • The ladder logic scales that count into 0–100 PSI

Without scaling, the controller only knows that it received a number. It does not know whether that number means 47.2 PSI or something else.

Example: scaling a 0–100 PSI transmitter

Assume:

  • Raw Min = 0
  • Raw Max = 32767
  • EU Min = 0.0 PSI
  • EU Max = 100.0 PSI
  • Raw Input = 16384

Then:

Scaled = [((16384 - 0) × (100.0 - 0.0)) / (32767 - 0)] + 0.0

Scaled ≈ 50.0 PSI

That is the core job of analog scaling: convert card counts into values the process can use.
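The expanded formula and the worked example above can be sketched as a small function. This is illustrative only: Python stands in for ladder or structured text, and `scale_analog` is a hypothetical name, not a PLC instruction.

```python
def scale_analog(raw, raw_min, raw_max, eu_min, eu_max):
    """Linear map from raw analog counts to engineering units (y = mx + b form)."""
    if raw_max == raw_min:
        raise ValueError("raw_max and raw_min must differ")
    # Multiply before dividing, in floating point, so precision is preserved.
    return ((raw - raw_min) * (eu_max - eu_min)) / (raw_max - raw_min) + eu_min

# The worked example: 16384 counts on a 0-32767 card spanning 0-100 PSI.
print(round(scale_analog(16384, 0, 32767, 0.0, 100.0), 2))  # prints 50.0
```

The exact result is 50.0015 PSI, not 50.0, because 16384 is not exactly half of 32767; that distinction becomes relevant in the resolution discussion below.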

How do raw analog signals become PLC integers?

Analog sensors produce continuous electrical signals, while PLC logic works on discrete digital values. The analog input card performs the conversion.

The physics and hardware path

A typical path looks like this:

  • The field device generates a continuous signal, such as 4–20 mA or 0–10 V
  • The PLC analog input module samples that signal
  • The module’s analog-to-digital converter assigns the signal to a discrete integer
  • The PLC program scales that integer into engineering units

This matters because the PLC never sees an infinitely smooth physical signal. It sees a finite number of digital steps. That is where resolution enters the story.

Common raw ranges in practice

Raw ranges vary by platform and module design. Examples include:

  • 0 to 4095 for a 12-bit range
  • 0 to 32767 for the non-negative half of a 16-bit signed range (32768 steps), common on vendor-normalized modules
  • 0 to 65535 for a 16-bit unsigned range

The exact raw range is vendor-specific. The scaling method is not.

How does 12-bit vs. 16-bit resolution affect analog precision?

Bit depth determines how many discrete values the input card can represent across the signal range. More bits mean finer granularity and a lower quantization interval.

Resolution math

The number of available steps is:

2^n

Where n is the bit depth.

So:

  • 12-bit = 4096 steps
  • 15-bit = 32768 steps
  • 16-bit = 65536 steps

Step size for a 0–100 PSI transmitter

For a 0–100 PSI range, the approximate engineering-unit step size is:

Step Size = EU Range / (Raw Steps - 1)

| Resolution | Integer Range | Approx. Step Size for 0–100 PSI |
|---|---:|---:|
| 12-bit | 0 to 4095 | 0.0244 PSI/step |
| 15-bit | 0 to 32767 | 0.0030 PSI/step |
| 16-bit | 0 to 65535 | 0.0015 PSI/step |
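The table values can be reproduced with a one-line calculation. This is a quick sketch; `step_size` is an illustrative helper, not a vendor function.

```python
# Reproduce the step-size column: Step Size = EU Range / (Raw Steps - 1).
def step_size(eu_range, bits):
    return eu_range / (2**bits - 1)

for bits in (12, 15, 16):
    print(f"{bits}-bit: {step_size(100.0, bits):.4f} PSI/step")
```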

What this means operationally

Higher resolution reduces quantization error. That improves the fidelity of the value presented to logic, alarming, trending, and closed-loop control.

A few boundaries matter:

  • Better resolution does not automatically mean better measurement accuracy
  • It does not correct bad transmitter calibration
  • It does not resolve poor grounding, noise, or bad loop tuning

It does mean the card can distinguish smaller changes.

Why this matters for PID loops

PID loops react to the measured process value. If the measured value updates in coarse steps, the controller sees a chunked version of reality.

That can contribute to:

  • output hunting
  • poor fine control near setpoint
  • noisy derivative behavior
  • awkward trend interpretation

Resolution is not the only variable in loop quality, but it is one of them.

Why do integer truncation errors occur in analog scaling?

Integer truncation occurs because PLC math follows data types strictly. If you divide integers using integer math, the fractional remainder is discarded.

That is not a software bug. It is the expected result of integer arithmetic.

The core hazard

If a ladder routine performs this operation with INT values:

16384 / 32767 = 0

The PLC does not preserve the decimal portion. It truncates the result to 0.

If that truncated result is then multiplied by the engineering range, the scaled value collapses incorrectly.

Why operation order matters

This sequence is risky when using integer data types:

  1. Divide first
  2. Multiply second
  3. Store result in INT

That sequence often destroys precision before the logic can use it.

This sequence is safer:

  1. Subtract offsets
  2. Multiply numerator first
  3. Convert to REAL
  4. Divide using floating-point math
  5. Add engineering-unit offset

In short: preserve precision before division.

Example of bad integer scaling

Assume:

  • Raw Input = 16384
  • Raw Max = 32767
  • EU Range = 100

If the logic computes:

(16384 / 32767) × 100

Using integer math:

  • 16384 / 32767 = 0
  • 0 × 100 = 0

The result is 0 PSI, which is clearly false.

Example of correct floating-point scaling

If the logic computes:

(16384 × 100.0) / 32767

Using REAL math:

  • 1638400.0 / 32767 ≈ 50.0

The result is correct.
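Both outcomes can be demonstrated side by side. In this sketch, Python's floor-division operator `//` mimics a PLC INT divide, while `/` with a float mimics REAL math; the variable names are illustrative.

```python
raw, raw_max, eu_max = 16384, 32767, 100

# Integer division first (// mimics a PLC INT divide): the fraction is lost.
bad = (raw // raw_max) * eu_max       # 16384 // 32767 == 0, so the result is 0
# Multiply first, then divide in floating point: precision survives.
good = (raw * float(eu_max)) / raw_max

print(bad, round(good, 1))  # prints: 0 50.0
```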

Where truncation becomes expensive

Truncation errors are especially damaging in:

  • flow totalization
  • energy calculations
  • batching
  • dose control
  • long-running accumulation logic

A single lost fraction may look harmless. Repeated many times, it can become operationally significant.

What data types should you use for PLC analog scaling?

Use integer types for raw card values and floating-point types for scaled engineering values and intermediate math where fractional precision matters.

A practical rule

A defensible default is:

  • Raw input: INT or DINT, depending on platform
  • Intermediate math: REAL
  • Scaled engineering value: REAL

This keeps the hardware-facing value in its native form while preserving fractional precision in the calculation.

Why REAL matters

Engineering units are often fractional:

  • 47.3 PSI
  • 62.8%
  • 18.6 GPM
  • 101.2 °C

If the process variable can be fractional, the math path should usually be fractional too.

Additional implementation checks

Also verify:

  • the analog card’s actual raw range from vendor documentation
  • whether the module reserves counts for underrange or overrange
  • whether signed values are used
  • whether filtering or averaging is applied before scaling
  • whether alarm thresholds are defined in raw or engineering units

The formula is universal. The endpoints are not.

How do you write analog scaling logic in ladder form?

A typical ladder implementation uses a sequence of math instructions that mirrors the expanded scaling formula.

Ladder math block sequence

Rung 1: SUB Raw_Input Raw_Min -> Raw_Offset

Rung 2: SUB EU_Max EU_Min -> EU_Range

Rung 3: MUL Raw_Offset EU_Range -> Numerator_REAL

Rung 4: SUB Raw_Max Raw_Min -> Raw_Range

Rung 5: DIV Numerator_REAL Raw_Range -> Scaled_Offset_REAL

Rung 6: ADD Scaled_Offset_REAL EU_Min -> Scaled_Value_REAL
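The six rungs above translate line for line into the following sketch. Python stands in for ladder math here; the variable names mirror the rung operands, and the explicit `float()` conversion plays the role of the INT-to-REAL conversion before the divide.

```python
def scale_rungs(raw_input, raw_min, raw_max, eu_min, eu_max):
    """One statement per rung, mirroring the SUB/MUL/DIV/ADD sequence."""
    raw_offset = raw_input - raw_min           # Rung 1: SUB
    eu_range = eu_max - eu_min                 # Rung 2: SUB
    numerator = float(raw_offset) * eu_range   # Rung 3: MUL into a REAL
    raw_range = raw_max - raw_min              # Rung 4: SUB
    scaled_offset = numerator / raw_range      # Rung 5: DIV in floating point
    return scaled_offset + eu_min              # Rung 6: ADD
```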

Example values for a 4–20 mA equivalent raw range

If a module maps the signal to 0–32767 and the transmitter represents 0.0–100.0 PSI, then:

  • Raw_Min = 0
  • Raw_Max = 32767
  • EU_Min = 0.0
  • EU_Max = 100.0

If your platform uses a live-signal range such as counts corresponding only to 4–20 mA, adjust the raw endpoints accordingly. This is one of the most common sources of quiet scaling error.
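To make the live-range pitfall concrete, here is a sketch with hypothetical count values. The 6400–32000 band is invented for illustration; real modules differ, so always take the endpoints from vendor documentation.

```python
# Illustrative only: hypothetical counts for a module that maps the live
# 4-20 mA band to 6400-32000. Real endpoints come from vendor documentation.
RAW_MIN_LIVE, RAW_MAX_LIVE = 6400, 32000

def scale_live(raw, eu_min=0.0, eu_max=100.0):
    return ((raw - RAW_MIN_LIVE) * (eu_max - eu_min)) / (RAW_MAX_LIVE - RAW_MIN_LIVE) + eu_min

# The midpoint of the live band should read mid-scale.
print(round(scale_live(19200), 1))  # prints 50.0
```

Using 0 as Raw Min on such a card would silently shift every reading, which is exactly the quiet error described above.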

How do you simulate analog scaling math in OLLA Lab?

Analog scaling should be validated in a safe environment before it is trusted on a live process. In OLLA Lab, that means observing the raw value, the intermediate math behavior, and the final engineering-unit output inside a browser-based simulation workflow.

What “Simulation-Ready” means here

In this article, Simulation-Ready means an engineer can:

  • inject a defined input condition
  • observe the controller’s intermediate logic states
  • compare ladder-state math to simulated equipment or signal behavior
  • diagnose incorrect scaling or data-type handling
  • revise the logic
  • verify the corrected result before deployment

That is a validation behavior, not a claim of field readiness by itself.

A practical validation workflow in OLLA Lab

Use OLLA Lab as a bounded rehearsal environment for scaling logic:

  1. Inject a raw value Use the simulation environment to apply a known analog input condition.
  2. Monitor intermediate math states Observe the outputs of the SUB, MUL, and DIV steps in the ladder logic editor.
  3. Check the Variables Panel Compare the raw integer, intermediate values, and final REAL engineering-unit tag.
  4. Verify against expected math Confirm that the simulated result matches the hand-calculated value.
  5. Test edge conditions Validate low-end, mid-range, high-end, underrange, and overrange behavior.
  6. Deliberately break the data types Force an integer-only version and observe the truncation error.
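Step 4 of this workflow, comparing simulated results against hand calculations, can be automated with a tiny test harness. This is a sketch: the `scale` function and the test points are illustrative assumptions, not OLLA Lab APIs.

```python
def scale(raw, raw_min=0, raw_max=32767, eu_min=0.0, eu_max=100.0):
    return ((raw - raw_min) * (eu_max - eu_min)) / (raw_max - raw_min) + eu_min

# Hand-calculated expectations at the low end, midpoint, and high end.
cases = [(0, 0.0), (16384, 50.0), (32767, 100.0)]
for raw, expected in cases:
    assert abs(scale(raw) - expected) < 0.01, (raw, scale(raw))
print("all scaling checks passed")
```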

Why the Variables Panel matters

The Variables Panel is useful because it exposes the relationship between:

  • raw I/O values
  • tag states
  • analog values
  • scaled outputs

That visibility helps distinguish between logic that looks correct and logic that has been checked.

Image alt text: Screenshot of the OLLA Lab Variables Panel displaying an analog scaling routine. The raw 16-bit integer value of 16384 is shown scaled to a floating-point engineering unit of 50.0 PSI.

What should you verify before using a scaled analog value in control logic?

A scaled value is only trustworthy if the full signal path has been checked. Engineers should verify both the mathematics and the operating assumptions behind it.

Minimum verification checklist

  • Confirm the actual raw range from the analog module documentation
  • Confirm the sensor’s calibrated engineering range
  • Verify whether the input is signed or unsigned
  • Use REAL math where fractional precision matters
  • Check midpoint scaling with a known test value
  • Check low-end and high-end endpoints
  • Verify alarm and trip thresholds in the same unit domain
  • Confirm whether filtering affects displayed versus control values
  • Validate abnormal conditions such as signal loss or out-of-range input
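The last checklist item, abnormal-condition handling, can be sketched as a scaling routine that also raises a fault flag. The 2% tolerance band and the function name are illustrative assumptions; real signal-loss detection depends on the module's underrange and overrange behavior.

```python
def scale_checked(raw, raw_min=0, raw_max=32767,
                  eu_min=0.0, eu_max=100.0, margin=0.02):
    """Scale the input and flag raw counts that sit well outside the
    expected range (a crude proxy for signal loss or wiring faults)."""
    value = ((raw - raw_min) * (eu_max - eu_min)) / (raw_max - raw_min) + eu_min
    band = margin * (raw_max - raw_min)            # 2% tolerance band
    fault = raw < raw_min - band or raw > raw_max + band
    return value, fault

print(scale_checked(16384))   # healthy mid-range reading, fault is False
print(scale_checked(-2000))   # far below raw_min, fault is True
```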

A field-aware distinction

A value can be mathematically correct and still be operationally wrong if the transmitter range, card configuration, or alarm philosophy is mismatched.

How should engineers document analog scaling skill as evidence?

Engineers should document analog scaling as a compact body of engineering evidence, not a gallery of screenshots. The point is to show reasoning, validation method, and revision discipline.

Use this structure:

  1. System Description Define the signal source, raw range, engineering range, and control purpose.
  2. Operational definition of “correct” State what counts as success: endpoint match, midpoint accuracy, alarm threshold behavior, and acceptable precision.
  3. Ladder logic and simulated equipment state Show the scaling logic and the corresponding simulated signal or equipment condition.
  4. The injected fault case Introduce a realistic error, such as wrong raw max, integer-only division, or mismatched 4–20 mA endpoints.
  5. The revision made Document the change: data type conversion, reordered math, corrected raw range, or adjusted alarm basis.
  6. Lessons learned Explain what failed, why it failed, and how the corrected logic was verified.

What standards and literature support careful analog validation and simulation practice?

Analog scaling itself is basic control math, but the discipline of validating control behavior before deployment is supported by standards and industrial literature.

Relevant standards and guidance

  • IEC 61508 emphasizes systematic capability, validation discipline, and lifecycle rigor for electrical, electronic, and programmable electronic safety-related systems.
  • ISA-5.1 supports consistent instrumentation identification and documentation practices, which matter when scaling logic must align with actual field devices.
  • exida guidance on automation and safety lifecycle practice consistently stresses verification, configuration control, and fault-aware validation before live operation.

Why simulation belongs in the workflow

Simulation is useful because it allows engineers to test control behavior under repeatable conditions without exposing a live process to unnecessary risk. That is particularly relevant when validating:

  • alarm thresholds
  • analog scaling
  • interlocks
  • sequencing
  • abnormal-state handling

A digital twin or simulator does not replace field commissioning. It can reduce avoidable surprises before field commissioning begins.

Conclusion

Scaling analog inputs in a PLC is a linear math problem with operational consequences. The formula is straightforward, but resolution limits, raw-range assumptions, and integer truncation can quietly corrupt the result.

The practical standard is simple:

  • know the module’s real raw range
  • scale with the correct endpoints
  • use floating-point math where needed
  • validate the result before deployment

OLLA Lab fits that workflow as a bounded validation environment. It allows users to observe raw counts, intermediate math, and final engineering values in one place, then test failure cases safely. That does not make someone site-competent by itself. It may make scaling errors cheaper to find.


Editorial transparency

This blog post was written by a human, with all core structure, content, and original ideas created by the author. However, this post includes text refined with the assistance of ChatGPT and Gemini. AI support was used exclusively for correcting grammar and syntax, and for translating the original English text into Spanish, French, Estonian, Chinese, Russian, Portuguese, German, and Italian. The final content was critically reviewed, edited, and validated by the author, who retains full responsibility for its accuracy.

About the Author:PhD. Jose NERI, Lead Engineer at Ampergon Vallis

Fact-Check: Technical validity confirmed on 2026-03-23 by the Ampergon Vallis Lab QA Team.

Ready for implementation

Use simulation-backed workflows to turn these insights into measurable plant outcomes.

© 2026 Ampergon Vallis. All rights reserved.