BUG/MEDIUM: connection: close a rare race between idle conn close and takeover
The takeover of idle conns between threads is particularly tricky, for
two reasons:
- there's no way to atomically synchronize kernel-side polling with
userspace activity, so late events will always be reported for some
FDs that were just migrated;
- upon error, an FD may immediately be reassigned to whatever other
thread since FDs are process-wide.
The current model uses the FD's thread_mask to figure out whether an
FD still ought to be reported or not, and a per-thread idle connection
queue from which eligible connections are atomically added and picked.
I/Os coming from the bottom for such a connection must remove it from
the list so that it cannot be elected for a takeover. The same goes
for timeout tasks and iocbs, and these last ones check their context
under the idle conns lock to judge whether they're still allowed to
run.
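As an illustration of that pattern, here is a minimal sketch using
simplified, hypothetical names (idle_conns_lock,
conn_delete_from_idle_list() and friends are stand-ins for the real
per-thread machinery, not the actual HAProxy identifiers):

    #include <pthread.h>
    #include <stddef.h>

    struct connection;
    struct task { void *context; };

    /* hypothetical stand-ins for the real per-thread machinery */
    extern pthread_mutex_t idle_conns_lock;  /* per-thread in reality */
    void conn_delete_from_idle_list(struct connection *conn);
    void handle_idle_conn_expiry(struct connection *conn);

    /* A late timeout task re-validates its context under the idle
     * conns lock: a successful takeover from another thread sets the
     * context to NULL, so the task silently gives up in that case.
     */
    struct task *idle_conn_timeout(struct task *t)
    {
        struct connection *conn;

        pthread_mutex_lock(&idle_conns_lock);
        conn = t->context;               /* re-read under the lock */
        if (!conn) {
            /* a takeover on another thread detached the conn */
            pthread_mutex_unlock(&idle_conns_lock);
            return t;
        }
        /* still ours: detach it so a concurrent takeover can't elect it */
        conn_delete_from_idle_list(conn);
        pthread_mutex_unlock(&idle_conns_lock);

        handle_idle_conn_expiry(conn);   /* safe: conn is now private */
        return t;
    }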
One rare case was omitted: the wake() callback. This one is rare: it
may serve to notify about finalized connect() calls that are not being
polled, as well as about unhandled shutdowns and errors. This callback
was not protected until now because it wasn't seen as sensitive, but
there exists a particular case where it may be called without
protection in parallel to a takeover. This happens in the following
sequence (a simplified sketch of the racy window is shown after the
list):
- thread T1 wants to establish an outgoing connection
- the connect() call returns EINPROGRESS
- the poller adds it using epoll_ctl()
- epoll_wait() reports it, connect() is done. The connection is
not being marked as actively polled anymore but is still known
from the poller.
- the request is sent over that connection using send(), which
queues to system buffers while data are being delivered
- the scheduler switches to other tasks
- the request is physically sent
- the server responds
- the stream is notified that send() succeeded, and makes progress,
trying to recv() from that connection
- the recv() succeeds, the response is delivered
- the poller doesn't need to be touched (still no active polling)
- the scheduler switches to other tasks
- the server closes the connection
- the poller on T1 is notified of the SHUTR and starts to call mux->wake()
- another thread T2 takes over the connection
- T1 continues to run inside wake() and releases the connection
- T2 is just dereferencing it.
- BAM.
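To make the window concrete, below is a minimal sketch of the
unprotected path; the types and function names are simplified
stand-ins rather than the literal HAProxy code (in older versions the
real code lives in conn_fd_handler()):

    struct connection;
    struct mux_ops { int (*wake)(struct connection *conn); };
    struct connection { const struct mux_ops *mux; };

    void io_handler_before_fix(struct connection *conn)
    {
        /* ... T1 detects SHUTR/error on the FD ... */

        /* conn is still in the idle conns tree/list at this point,
         * so T2 may perform a takeover right here.
         */
        if (conn->mux && conn->mux->wake)
            conn->mux->wake(conn);   /* T1 may release conn in here... */

        /* ...while T2 keeps dereferencing the freed conn: BAM. */
    }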
The most logical solution here is to surround the call to wake() with
an atomic removal/insertion of the connection from/into the idle conns
lists. This way, wake() is guaranteed to run alone. Any other poller
reporting the FD will not have its tid_bit in the thread_mask, so it
will not bother it. Another thread trying a takeover will not find
this connection. A task or tasklet being woken up late will either be
on the same thread, or be called on another one with a NULL context as
the consequence of a previously successful takeover, and will be
ignored. Note that the extra lock and tree accesses here have a low
overhead which is totally amortized, given that they roughly happen
1-2 times per connection at best.
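In code terms, the fix follows the sketch below. The names are again
simplified stand-ins, and it assumes the mux API convention that a
negative return from wake() means the connection was released:

    #include <pthread.h>

    struct connection;
    struct mux_ops { int (*wake)(struct connection *conn); };
    struct connection { const struct mux_ops *mux; };

    extern pthread_mutex_t idle_conns_lock;  /* per-thread in reality */
    void conn_delete_from_idle_list(struct connection *conn);
    void conn_insert_into_idle_list(struct connection *conn);

    /* Detach the connection under the lock before waking the mux, so
     * that wake() is guaranteed to run alone, then re-insert it if it
     * survived the call.
     */
    int notify_mux_wake(struct connection *conn)
    {
        int ret;

        pthread_mutex_lock(&idle_conns_lock);
        conn_delete_from_idle_list(conn);    /* invisible to takeover now */
        pthread_mutex_unlock(&idle_conns_lock);

        ret = conn->mux->wake(conn);         /* guaranteed to run alone */
        if (ret < 0)
            return ret;                      /* wake() released the conn */

        pthread_mutex_lock(&idle_conns_lock);
        conn_insert_into_idle_list(conn);    /* takeover-eligible again */
        pthread_mutex_unlock(&idle_conns_lock);
        return ret;
    }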
While it was possible to crash the process after 10-100k requests
using H2 and a hand-refined configuration achieving perfect
synchronization between a long (20+) chain of proxies and a short
timeout (1ms), with this fix it no longer happens even after 10M
requests.
Many thanks to Olivier for proposing this solution and explaining why
it works.
This should be backported as far as 2.2 (when inter-thread takeover was
introduced). The code in older versions will be found in conn_fd_handler().
A workaround consists in disabling inter-thread pool sharing using:
tune.idle-pool.shared off
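For example, in the global section of the configuration:

    global
        tune.idle-pool.shared off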