BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API

In 1.9, commit 627505d36 ("MINOR: freq_ctr: add swrate_add_scaled()
to work with large samples") added the ability to indicate, when adding
some values, that they represent a number of samples. However there is
an issue in the calculation: the number of samples that is added to the
sum before the division, in order to avoid fading away too fast, is
itself multiplied by the scale. The problem is that this happens in the
negative part of the expression, so as soon as the sum of old_sum and
v*s is too small (e.g. zero), we end up with a negative value of -s.
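
For illustration, here is a standalone sketch of the arithmetic (a
simplification, without the atomic update loop of the real
swrate_add_scaled(); the "fixed" form is an assumption based on the
description above, not a copy of the actual patch):

    #include <stdio.h>

    /* Buggy form: the <n> added to avoid fading away too fast is
     * multiplied by the scale <s>, i.e. we subtract old_sum*s/n + s.
     * With old_sum == 0 and v == 0 this yields -s.
     */
    static unsigned int add_scaled_buggy(unsigned int old_sum,
                                         unsigned int n,
                                         unsigned int v,
                                         unsigned int s)
    {
            return old_sum + v * s -
                   (unsigned int)((unsigned long long)(old_sum + n) * s / n);
    }

    /* Assumed fix: apply the scale to old_sum only, so the subtracted
     * term can never exceed old_sum*s/n + 1 and the result can no
     * longer drop below zero when old_sum and v are both zero.
     */
    static unsigned int add_scaled_fixed(unsigned int old_sum,
                                         unsigned int n,
                                         unsigned int v,
                                         unsigned int s)
    {
            return old_sum + v * s -
                   (unsigned int)(((unsigned long long)old_sum * s + n - 1) / n);
    }

    int main(void)
    {
            /* old_sum=0, v=0, n=100, s=8: the buggy form wraps to
             * (unsigned int)-8 while the fixed form stays at 0.
             */
            printf("buggy: %u\n", add_scaled_buggy(0, 100, 0, 8));
            printf("fixed: %u\n", add_scaled_fixed(0, 100, 0, 8));
            return 0;
    }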

This is visible in "show pools", which occasionally reports a very
large value for "needed_avg" since 2.9, though the bug has been there
for longer. Indeed, since 2.9 the counters are hashed into buckets, so
it suffices for any thread to hit this error once for the sum to be
wrong. One possible impact is memory usage not shrinking after a short
burst, due to pools refraining from releasing objects, believing they
don't have enough.
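
The "very large value" is simply the -s wrap-around seen through an
unsigned counter; a minimal demonstration (assuming the sum is stored
as an unsigned int, as in the sketch above):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int sum = 0;

            /* the -s underflow described above, with s == 8 */
            sum -= 8;
            printf("sum after underflow: %u (UINT_MAX is %u)\n",
                   sum, UINT_MAX);
            return 0;
    }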

This must be backported to all versions. Note that the hunk touching
the opportunistic version can be dropped when backporting to versions
older than 2.8.

(cherry picked from commit e3b2704e26a32bb67f4921193acef167962cf5db)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 3fde519c08f89e619b51c644bc641c926ebd175c)
[cf: opportunistic version dropped as expected]
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit c5e0f7c28c3fc681f039526e2f294fe0ded9ef45)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 1bb1d5e0ebe0c8b344f354ad33fab014baf34c9f)
[cf: ctx adjt]
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>