Discussion:
PID controller with a limited range of control variable
pozz
2023-04-26 06:57:08 UTC
In one application an AC voltage sourced by a generator is applied to a
load as a PWM waveform. The period PER is fixed (for example, 100ms) and
the pulse-width PW has a maximum (for example, 20ms).

The user can change the voltage peak (and consequently the voltage RMS)
by a handle, while the controller automatically manages the PW.

The load changes its temperature when the PWM waveform is applied to it,
and a sensor that reads the temperature is present.

There's a temperature upper limit (for example, 40°C) that mustn't be
exceeded. If needed, the controller should decrease the PW accordingly
to avoid temperature overshoot. Otherwise, the PW is the maximum allowed
(20ms).

Of course, when the voltage peak is low, PW could be at the maximum. When
the user increases the voltage, the controller should be smart enough to
reduce PW accordingly, and fast enough to avoid temperature overshoot.

I was thinking of implementing a PID controller with the load temperature
as the process variable PV and the thermal power as the control variable
CV. From the thermal power I could calculate the PW to use.

CV = (Vrms^2 / Z) * (PW / PER)
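
To make it concrete, this is the kind of conversion I have in mind (just
a sketch in C; PER_MS, PW_MAX_MS, vrms and load_impedance are illustrative
names, not real code from the project):

/* Turn the commanded thermal power CV (in watts) into a pulse width,
 * clamped to the hardware maximum. */
#define PER_MS    100.0f   /* fixed PWM period, in ms     */
#define PW_MAX_MS  20.0f   /* maximum pulse width, in ms  */

static float cv_to_pulse_width(float cv_watts, float vrms, float load_impedance)
{
    /* invert CV = (Vrms^2 / Z) * (PW / PER)  =>  PW = CV * Z * PER / Vrms^2 */
    float pw = cv_watts * load_impedance * PER_MS / (vrms * vrms);

    if (pw > PW_MAX_MS) pw = PW_MAX_MS;
    if (pw < 0.0f)      pw = 0.0f;
    return pw;
}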

Apart from calibrating the PID constants, there's another problem. When
the voltage is low, the CV from the PID algorithm would be greater than
the maximum allowed (limited by the maximum PW duration). The system could
stay in this state for minutes, so the PW calculated by the PID grows to
very large values over time.

When the user decides to increase the voltage and the moment comes to
reduce the PW, the PID response could be very slow, because it would start
from a very high PW (a very large accumulated error in the integral term).

What's the trick here to avoid this behaviour? Should I limit the sum of
errors of the I term? Should I re-initialize the PID state in some way?
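
For reference, the update I have in mind is essentially the textbook
discrete PID below (only a sketch, with placeholder names and no
anti-windup); i_sum is the accumulated error that keeps growing while the
output is pinned at the PW limit:

struct pid_state {
    float kp, ki, kd;   /* PID gains                      */
    float i_sum;        /* accumulated (integrated) error */
    float prev_err;     /* error at the previous sample   */
};

static float pid_update(struct pid_state *p, float setpoint, float pv, float dt)
{
    float err   = setpoint - pv;
    float deriv = (err - p->prev_err) / dt;

    p->i_sum   += err * dt;    /* this term winds up while PW is clamped */
    p->prev_err = err;

    return p->kp * err + p->ki * p->i_sum + p->kd * deriv;
}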
Ed Prochak
2023-04-27 21:14:00 UTC
Post by pozz
In one application an AC voltage sourced by a generator is applied to a
load as a PWM waveform. The period PER is fixed (for example, 100ms) and
the pulse-width PW has a maximum (for example, 20ms).
The user can change the voltage peak (and consequently the voltage RMS)
by a handle, while the controller automatically manages the PW.
Looks like an unusual feature to me. Can you share what the purpose of
varying the supply voltage is?
Post by pozz
The load changes its temperature when the PWM waveform is applied to it,
and a sensor that reads the temperature is present.
There's a temperature upper limit (for example, 40°C) that mustn't be
exceeded. If needed, the controller should decrease the PW accordingly
to avoid temperature overshoot. Otherwise, the PW is the maximum allowed
(20ms).
Of course, when the voltage peak is low, PW could be at the maximum. When
the user increases the voltage, the controller should be smart enough to
reduce PW accordingly, and fast enough to avoid temperature overshoot.
I was thinking of implementing a PID controller with the load temperature
as the process variable PV and the thermal power as the control variable
CV. From the thermal power I could calculate the PW to use.
CV = (Vrms^2 / Z) * (PW / PER)
Apart from calibrating the PID constants, there's another problem. When
the voltage is low, the CV from the PID algorithm would be greater than
the maximum allowed (limited by the maximum PW duration). The system could
stay in this state for minutes, so the PW calculated by the PID grows to
very large values over time.
When the user decides to increase the voltage and the moment comes to
reduce the PW, the PID response could be very slow, because it would start
from a very high PW (a very large accumulated error in the integral term).
What's the trick here to avoid this behaviour? Should I limit the sum of
errors of the I term? Should I re-initialize the PID state in some way?
I think the easiest solution is to cap the error/PW value.
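
Something along these lines might do (an untested sketch with illustrative
names): clamp the output to the CV range that the current Vrms can actually
deliver, and only let the integrator accumulate while the output is not
saturated (conditional-integration anti-windup).

struct pid_aw {
    float kp, ki, kd;   /* PID gains                      */
    float i_sum;        /* accumulated (integrated) error */
    float prev_err;     /* error at the previous sample   */
};

static float pid_update_aw(struct pid_aw *p, float setpoint, float pv,
                           float dt, float cv_min, float cv_max)
{
    float err   = setpoint - pv;
    float deriv = (err - p->prev_err) / dt;
    p->prev_err = err;

    /* provisional output with the current integrator value */
    float out = p->kp * err + p->ki * p->i_sum + p->kd * deriv;

    if (out > cv_max) {
        out = cv_max;               /* saturated high                    */
        if (err < 0.0f)
            p->i_sum += err * dt;   /* only let the integral unwind      */
    } else if (out < cv_min) {
        out = cv_min;               /* saturated low                     */
        if (err > 0.0f)
            p->i_sum += err * dt;
    } else {
        p->i_sum += err * dt;       /* not saturated: integrate normally */
    }

    return out;
}

Here cv_max would be recomputed every cycle from the current Vrms (roughly
cv_max = Vrms^2 / Z * PW_max / PER, with cv_min = 0), so the integral never
has to work off a request the 20 ms pulse could not deliver anyway.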

Good luck,
Ed
