Article summary
To prevent PID aliasing in PLC process control, the controller must sample the process variable fast enough relative to the highest meaningful process frequency. If scan time is too slow, the PLC can misrepresent process behavior, corrupt derivative and integral action, and destabilize the loop unless deterministic periodic task scheduling is used.
PID instability is not always a tuning problem. Sometimes the loop is tuned reasonably, but the controller is sampling reality too slowly to represent it correctly.
That distinction matters because a PLC is a discrete-time system, not a continuous observer. It only knows the process at each scan, and everything between scans is invisible to the algorithm. In practice, that means a loop can behave well in a fast software test and then misbehave on a loaded controller where scan time has drifted upward. The code did not become incorrect; the sampling assumptions did.
During internal benchmarking in the OLLA Lab simulation environment, increasing the virtual PLC scan time from 10 ms to 50 ms in a high-speed pressure-control scenario, while holding process dynamics and tuning constant, produced a 42% increase in accumulated integral error before loss of stable regulation. [Methodology: n=12 repeated runs of one pressure-loop task, baseline comparator = 10 ms scan condition, time window = 90 seconds per run.] This supports a bounded point: scan-time degradation alone can materially destabilize a fast loop. It does not prove a universal failure threshold for all PID applications.
What Is the Nyquist Theorem in PLC Process Control?
The Nyquist-Shannon sampling theorem states that a sampled system must sample at least twice as fast as the highest frequency component it needs to represent. In compact form:
f_s ≥ 2 f_max
Where:
- f_s = sampling frequency
- f_max = highest relevant signal frequency
In PLC process control, the practical translation is straightforward: scan rate functions as sampling rate for any logic that reads the process variable, computes control action, and updates the output.
If a pressure signal contains meaningful variation at 10 Hz, a PLC must sample at 20 Hz or faster, once every 50 ms, just to avoid formal aliasing. For usable control performance, engineers usually want substantially faster execution than the bare Nyquist minimum. Detection is not the same thing as control quality.
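The aliasing arithmetic can be reproduced in a few lines. This is a minimal stdlib-only Python illustration, not PLC code: a 10 Hz signal sampled at 12 Hz, below the 20 Hz Nyquist minimum, shows up in the sampled data as a low-frequency oscillation at the difference frequency, |10 − 12| = 2 Hz.

```python
import cmath
import math

def dominant_freq(samples, fs):
    """Return the frequency (Hz) of the largest DFT bin below fs/2."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n

f_signal = 10.0   # real process frequency (Hz)
fs = 12.0         # sampling rate, below the 20 Hz Nyquist minimum
n = int(fs * 10)  # ten seconds of samples
samples = [math.sin(2 * math.pi * f_signal * i / fs) for i in range(n)]

apparent = dominant_freq(samples, fs)
print(apparent)   # the sampled data looks like a 2 Hz process, not a 10 Hz one
```

The sampled sequence is indistinguishable from a genuine 2 Hz signal, which is exactly why a trend display cannot reveal the problem after the fact.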
Why does this matter for a PID loop?
A PID loop assumes that the sampled process variable is a usable representation of the real process. If the sample interval is too large:
- peaks may be missed,
- apparent oscillation frequency may be distorted,
- derivative action may respond to false slopes,
- integral action may accumulate error against a misread process state.
The result is not merely noisy control. It can be mathematically incorrect control.
Common symptoms of aliasing in PLC-based PID loops
- Phantom frequencies: The process variable appears to oscillate at a lower frequency than the physical process actually contains.
- Erratic derivative action: The calculated rate of change spikes because the controller is connecting sparse sample points with the wrong slope.
- Actuator chatter: Valves, dampers, or drives react to sampled distortion rather than real process behavior.
- Unexplained retuning cycles: Engineers keep changing gains when the underlying problem is execution timing, not controller aggressiveness.
A loop that looks mysteriously temperamental is often just under-sampled.
How Does the PLC Scan Cycle Act as a Sampling Rate?
A PLC samples the process through its execution cycle. In the standard model, that cycle is:
- Read inputs
- Execute logic
- Write outputs
That cycle defines the controller’s effective sampling interval for the control logic running inside it. If scan time is 20 ms, then the loop is effectively sampling at 50 Hz. If scan time drifts to 80 ms under CPU load, the effective sampling rate falls to 12.5 Hz.
This is why scan time is not a housekeeping detail. It is part of the control design.
Why does scan time drift matter?
Scan time is rarely fixed in a continuous main task. It changes with:
- added ladder rungs,
- communications overhead,
- HMI polling,
- data logging,
- alarm handling,
- motion or sequencing tasks,
- background diagnostics.
A loop that behaved during early commissioning can degrade later when the project grows. That is a common field pattern: Phase 1 logic is clean, Phase 3 logic is feature-complete, and the CPU quietly becomes part of the problem.
Continuous sweep versus periodic task execution
IEC 61131-3 supports task models that distinguish between continuous execution and scheduled periodic execution. For high-speed PID, that distinction is not stylistic. It is architectural.
A PID call placed in a main continuous task may execute with a variable Δt that changes with total program load. The same PID call placed in a 10 ms periodic task can execute with a deterministic Δt for integral and derivative calculation.
The code line may look identical. The execution context is not. In control work, identical logic in the wrong task is still wrong.
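To make the execution-context point concrete, here is a minimal positional-form PID sketch in Python. It is illustrative only, not any vendor's function block, and the names are hypothetical. The point to notice is that Δt appears explicitly in both the integral and derivative terms, so the quality of the output depends on how trustworthy that Δt is.

```python
class DiscretePID:
    """Positional-form PID; dt must match the real execution interval."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Integral action scales with dt: a wrong dt mis-weights accumulated error.
        self.integral += error * dt
        # Derivative action divides by dt: a wrong dt scales the slope directly.
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a periodic task, `dt` is simply the task period and never changes. In a continuous sweep, `dt` must be measured every scan, and even a correctly measured but jittery `dt` feeds timing noise straight into the derivative term.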
Why Do Slow Scan Times Break the PID Derivative Term First?
The derivative term is most vulnerable because it depends directly on rate of change:
D ∝ Δe / Δt
Where:
- Δe = change in error
- Δt = elapsed time between samples
If Δt is too large, one of two failures usually appears:
- The controller misses the real change entirely. A fast disturbance occurs between scans and the derivative term never sees its actual structure.
- The controller interprets sparse samples as a steep artificial slope. The process changed gradually in real time, but the PLC sees only two distant points and calculates a large apparent derivative.
Either way, derivative action becomes untrustworthy. That is why many practitioners say “D stands for danger” in noisy or poorly sampled loops.
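The false-slope failure is easy to reproduce numerically. In this sketch (a simulated 10 Hz process variable with arbitrary timing values, not field data), a 50 ms sample interval spans half the oscillation period, so the backward difference hands the derivative term a slope with the wrong sign and magnitude:

```python
import math

def pv(t):
    """Simulated process variable: a 10 Hz oscillation."""
    return math.sin(2 * math.pi * 10.0 * t)

def sampled_slope(t, dt):
    """Finite-difference slope, as a PID derivative term would compute it."""
    return (pv(t + dt) - pv(t)) / dt

t = 0.02
fine = sampled_slope(t, 0.001)   # 1 ms spacing: close to the true rate of change
coarse = sampled_slope(t, 0.05)  # 50 ms scan: spans half the 10 Hz period
print(fine, coarse)              # coarse slope has the wrong sign and magnitude
```

The process itself did nothing unusual between the two samples; only the observation interval changed.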
What happens to the control output?
When derivative action amplifies a sampled error artifact, the control variable can:
- spike toward saturation,
- reverse direction too aggressively,
- excite oscillation instead of damping it,
- force the integral term into recovery behavior after the fact.
The loop then looks badly tuned even when the tuning constants were reasonable for a properly sampled system.
Does slow scan time also affect integral action?
Yes. Integral action is less flashy, but often just as damaging over time.
If the controller samples too slowly, the integral term accumulates error over a distorted representation of the process. That can produce:
- delayed correction,
- overshoot driven by apparent extra dead time,
- windup during actuator saturation,
- sluggish recovery after disturbances.
Derivative usually fails first in a visible way. Integral often leaves the longer recovery problem.
Why Is the Main Continuous Task a Poor Home for High-Speed PID?
The main continuous task is convenient, but convenience is not the same as determinism. High-speed loops need a fixed and known execution interval so that the controller’s internal time assumptions remain valid.
A PID algorithm is not just evaluating error magnitude. It is evaluating error over time. If that time base changes from scan to scan, both integral and derivative calculations become inconsistent.
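A small numerical sketch of that inconsistency (hypothetical numbers, Python for illustration): the true error ramps at a perfectly constant rate, but a controller that assumes a fixed 20 ms scan while the actual scan time jitters by ±30% computes a derivative that wanders over the same ±30% range, even though nothing in the process changed.

```python
import random

random.seed(42)
nominal_dt = 0.02   # the controller assumes a fixed 20 ms scan
true_slope = 1.0    # the real error ramps at exactly 1.0 per second

d_estimates = []
prev_error, t = 0.0, 0.0
for _ in range(50):
    actual_dt = nominal_dt * random.uniform(0.7, 1.3)  # scan-time jitter
    t += actual_dt
    error = true_slope * t
    # Derivative computed with the assumed dt, not the dt that actually elapsed:
    d_estimates.append((error - prev_error) / nominal_dt)
    prev_error = error

print(min(d_estimates), max(d_estimates))  # spread despite a constant true slope
```

The same arithmetic applies to the integral term: each sample is weighted by a Δt that does not match reality, so the accumulated error drifts away from the true time integral.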
What does deterministic periodic scheduling solve?
A periodic task improves control reliability by providing:
- a fixed Δt for PID execution,
- predictable timing for loop updates,
- reduced sensitivity to unrelated program growth,
- cleaner separation between fast control and slower housekeeping logic.
This is the operational distinction:
- Continuous sweep: variable timing, broad convenience, weak determinism
- Periodic task: fixed timing, narrower purpose, stronger control integrity
For fast loops, “it usually runs often enough” is not a control strategy.
What should be placed in periodic tasks?
As a general engineering pattern, periodic tasks are appropriate for:
- high-speed PID loops,
- fast analog conditioning,
- critical sequencing with tight timing assumptions,
- motion-adjacent control logic,
- time-sensitive fault detection.
Less time-critical logic can remain in slower or continuous tasks:
- reporting,
- noncritical alarming,
- recipe handling,
- HMI support,
- historian exchange.
The point is not to make everything fast. The point is to make the right things deterministic.
How Can You Recognize PID Aliasing in Real Commissioning Work?
PID aliasing often presents as a tuning issue, but the clues are usually timing-related. The loop may appear stable in one environment and unstable in another without any meaningful change in process physics.
Field indicators that point to sampling failure rather than bad gains
- The loop behaves in offline testing but fails on the production PLC under full program load.
- Oscillation frequency in the trend does not match what instrumentation or process knowledge suggests.
- Derivative action becomes erratic after additional logic, communications, or visualization features are added.
- Retuning helps briefly, then instability returns as the controller load changes again.
- The process variable trend looks stepped or unnaturally sparse relative to known process speed.
A useful correction to a common misconception
Aliasing is not the same thing as ordinary electrical noise. Noise is unwanted signal content. Aliasing is a sampling artifact created when the controller observes a signal too slowly. Filtering may help noise. It does not repeal sampling theory.
How Do You Simulate PID Aliasing Safely in OLLA Lab?
A live plant is a poor place to manufacture timing failures on purpose. Deliberately overloading a controller tied to pressure, flow, temperature, or chemical dosing equipment is not a serious validation method.
This is where OLLA Lab becomes operationally useful.
In OLLA Lab, engineers can build ladder logic, run it in simulation, observe live I/O and variable states, and validate behavior against a digital twin scenario while changing the virtual PLC execution speed. In the scan-time aliasing workflow, the physical simulation remains high fidelity while the user intentionally throttles the controller scan interval to observe when control quality degrades.
What the Scan Time Slider is for
The Scan Time Slider is best understood as a controlled fault-injection tool for timing assumptions. It allows the user to:
- hold process dynamics constant,
- hold tuning constants constant,
- vary virtual PLC scan time,
- observe when the sampled representation diverges from the simulated process,
- compare ladder state, I/O state, and equipment response under degraded timing.
That is a bounded product claim, not a universal one. OLLA Lab does not certify field competence or replace site commissioning. It provides a risk-contained environment to rehearse high-risk validation tasks that may be expensive or unsafe to stage on live equipment.
Operational definition: digital twin validation
In this context, digital twin validation means testing control logic against a realistic simulated equipment model while observing whether commanded control actions, I/O transitions, and process-state changes remain causally consistent under normal and faulted conditions.
How Do You Run a Scan-Time Aliasing Test in OLLA Lab?
A useful aliasing exercise should isolate timing as the independent variable. If tuning, process model, and disturbance profile all change at once, the result becomes anecdotal rather than diagnostic.
Recommended test sequence
1. Select a fast-response process scenario. Pressure, flow, or low-inertia thermal loops are better demonstrations than slow tank-level examples.
2. Build or load the PID ladder logic. Keep the control structure fixed across all runs.
3. Define the baseline condition. Start with a fast scan time, such as 5 ms or 10 ms, and record stable behavior.
4. Inject a repeatable disturbance. Use the same setpoint step, load change, or process upset for each run.
5. Increase scan time incrementally. Move from 10 ms to 20 ms, 50 ms, 100 ms, and beyond while keeping other conditions constant.
6. Observe and record the following:
- process variable trend,
- control variable response,
- derivative spikes,
- integral accumulation,
- actuator chatter or saturation,
- mismatch between equipment state and controller expectation.
7. Move the loop into a periodic task model if available in the exercise design. Compare variable-scan behavior against deterministic execution.
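The same sweep can be rehearsed offline in plain Python before any simulation environment is involved. This sketch is not an OLLA Lab API; it assumes a hypothetical first-order process with a 50 ms time constant and arbitrary PI gains, holds both constant, and varies only the scan time. The integrated absolute error (IAE) for an identical setpoint step grows as the scan slows:

```python
def run_loop(scan_time, t_end=2.0, dt_sim=0.001):
    """First-order process (tau = 50 ms) under PI control at a given scan time.
    Returns the integrated absolute error for a unit setpoint step."""
    tau, kp, ki = 0.05, 2.0, 5.0      # process and tuning held constant
    x = u = integral = 0.0            # process state, controller output, I term
    setpoint, iae = 1.0, 0.0
    steps_per_scan = round(scan_time / dt_sim)
    for step in range(int(t_end / dt_sim)):
        if step % steps_per_scan == 0:       # controller only sees the PV here
            error = setpoint - x
            integral += error * scan_time
            u = kp * error + ki * integral
        x += (u - x) / tau * dt_sim          # process evolves continuously
        iae += abs(setpoint - x) * dt_sim
    return iae

for h in (0.01, 0.05, 0.10):
    print(f"scan {h * 1000:.0f} ms -> IAE {run_loop(h):.3f}")
```

With these assumed numbers, the 10 ms run settles cleanly, the 50 ms run oscillates, and the 100 ms run loses regulation entirely; the specific thresholds depend on the process, which is exactly why the exercise has to be repeated against a realistic model.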
What should you be looking for?
Look for the point where the controller stops representing the process faithfully. That threshold may appear as:
- delayed disturbance recognition,
- false low-frequency oscillation,
- unstable derivative output,
- overshoot that was absent at faster scans,
- recovery behavior that becomes inconsistent between identical runs.
The useful lesson is not that slow is always bad. The useful lesson is which process dynamics require which execution discipline.
What Does “Simulation-Ready” Mean for This Kind of Control Work?
“Simulation-Ready” should not mean merely being familiar with a ladder editor.
Operationally, a Simulation-Ready engineer can:
- prove what correct means before deployment,
- observe process and controller state together,
- diagnose timing-related failure modes,
- inject faults without losing causal traceability,
- revise logic based on evidence,
- show why the revised logic is more robust.
For PID work, Simulation-Ready behavior includes verifying that:
- loop timing assumptions are explicit,
- scan rate is appropriate for process dynamics,
- derivative action is not being trusted on undersampled data,
- periodic task scheduling is used where determinism matters,
- fault response remains coherent when timing degrades.
What Engineering Evidence Should You Produce Instead of a Screenshot Gallery?
A credible control portfolio is a compact body of engineering evidence, not a folder of attractive trends with no argument attached.
Use this structure:
- System Description: Define the process, actuator, sensor, task rate, and control objective.
- Operational definition of correct: State measurable acceptance criteria: settling time, overshoot, steady-state error, actuator limits, alarm behavior, and fault response.
- Ladder logic and simulated equipment state: Show the control logic together with the simulated machine or process behavior it is intended to govern.
- The injected fault case: Document the timing fault, disturbance, sensor anomaly, or CPU-load condition introduced.
- The revision made: Explain what changed: task scheduling, filtering, gain adjustment, anti-windup logic, derivative handling, or interlock behavior.
- Lessons learned: State what the failure revealed and what design rule now follows from it.
This format is stronger than a screenshot gallery because it preserves causality.
What Standards and Literature Support This View of Sampling and Deterministic Control?
The relationship between sampling rate and signal fidelity is foundational in digital control theory, not a product-specific idea. Nyquist-Shannon sampling remains the relevant mathematical basis for understanding aliasing in sampled systems.
IEC 61131-3 provides the programming and task-structuring framework within which PLC execution timing is implemented. For safety-related and high-consequence applications, the broader discipline of deterministic behavior, validation, and bounded failure response is consistent with the engineering expectations found in IEC 61508 and related functional-safety practice. Those standards do not reduce to “run PID fast,” but they do reinforce a larger point: timing assumptions must be explicit, justified, and validated.
Simulation-based validation is also well supported in industrial and control literature, especially where live-system testing is constrained by safety, cost, or operational continuity. The exact fidelity required depends on the task. For timing-sensitive loop behavior, the simulation only becomes useful when it preserves the causal relationship between process change, controller execution, and output response.
Conclusion
PID aliasing is a sampling failure before it is a tuning failure. If the PLC does not sample the process fast enough, the controller is solving the wrong problem with unjustified confidence.
The practical remedy is equally clear:
- match scan rate to process dynamics,
- avoid placing fast PID loops in variable-time continuous sweeps,
- use deterministic periodic task scheduling,
- validate timing assumptions in simulation before touching live equipment.
OLLA Lab fits into that workflow as a bounded validation environment. It allows engineers to rehearse the part that real plants are least interested in donating for educational purposes: controlled failure.
Keep exploring
- Advanced Process Control and PID Simulation Hub
- How to Program Safety Interlocks and E-Stop Chains
- How to Test PLC What-If Scenarios in VR for Failure Analysis
- Validate scan-time behavior using OLLA Lab