OPTIM: raw-sock: don't speculate after a short read if polling is enabled

This reimplements the "done" action: when we experience a short read,
we're almost certain that we've exhausted the system's buffers and that
we'll meet an EAGAIN if we attempt to read again. If the FD is not yet
polled, the stream interface already takes care of stopping the
speculative read. When the FD is already being polled, we have two
options:
  - either we're running from a level-triggered poller, in which case
    we'd rather report that we've reached the end so that we don't
    speculate over the poller and let it report next time data are
    available;

  - or we're running from an edge-triggered poller in which case we
    have no choice and have to see the EAGAIN to re-enable events.

At the moment we don't have any edge-triggered poller, so it's desirable
to avoid speculative I/O that we know will fail.

Note that this must not be ported to SSL since SSL hides the real
readiness of the file descriptor.

Thanks to this change, we observe no EAGAIN anymore during keep-alive
transfers, and failed recvfrom() calls are reduced by half in
http-server-close mode (the client-facing side is always being polled and
the second recv can be avoided). Doing so results in about a 5%
performance increase in keep-alive mode. Similarly, we used to have up to
about 1.6% of EAGAIN on accept() (1/maxaccept), and these have completely
disappeared under high loads.
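
The fd_done_recv() logic added below can be modelled in isolation. The
struct and flag names here are simplified stand-ins for HAProxy's fdtab
state, not the real API:

```c
#include <assert.h>
#include <stdio.h>

/* Simplified model of per-FD receive state; in HAProxy this lives in
 * fdtab[] and the field names differ. */
struct fd_state {
	int recv_polled; /* FD is registered with the (level-triggered) poller */
	int recv_ready;  /* we believe recv() would succeed without polling */
};

static int fd_recv_polled(const struct fd_state *s)
{
	return s->recv_polled;
}

static void fd_cant_recv(struct fd_state *s)
{
	/* stop speculative reads; the poller will re-report readiness */
	s->recv_ready = 0;
}

/* The "done" action: after a short read we're almost certainly at the end
 * of the kernel buffer, so if the FD is polled, drop readiness and let the
 * level-triggered poller signal the next batch of data. */
static void fd_done_recv(struct fd_state *s)
{
	if (fd_recv_polled(s))
		fd_cant_recv(s);
}

int main(void)
{
	/* polled FD: a short read clears readiness, avoiding a doomed recv() */
	struct fd_state polled = { .recv_polled = 1, .recv_ready = 1 };
	fd_done_recv(&polled);
	assert(polled.recv_ready == 0);

	/* unpolled FD: readiness is left alone; the stream interface already
	 * stops the speculative read in that case */
	struct fd_state unpolled = { .recv_polled = 0, .recv_ready = 1 };
	fd_done_recv(&unpolled);
	assert(unpolled.recv_ready == 1);

	puts("ok");
	return 0;
}
```
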
diff --git a/include/proto/fd.h b/include/proto/fd.h
index c87dc3d..4f75bd6 100644
--- a/include/proto/fd.h
+++ b/include/proto/fd.h
@@ -252,6 +252,17 @@
 	updt_fd(fd);
 }
 
+/* Disable readiness when polled. This is useful to interrupt reading when it
+ * is suspected that the end of data might have been reached (eg: short read).
+ * This can only be done using level-triggered pollers, so if any edge-triggered
+ * is ever implemented, a test will have to be added here.
+ */
+static inline void fd_done_recv(const int fd)
+{
+	if (fd_recv_polled(fd))
+		fd_cant_recv(fd);
+}
+
 /* Report that FD <fd> cannot send anymore without polling (EAGAIN detected). */
 static inline void fd_cant_send(const int fd)
 {
diff --git a/src/listener.c b/src/listener.c
index ba7d727..836ca70 100644
--- a/src/listener.c
+++ b/src/listener.c
@@ -356,7 +356,7 @@
 				return;
 			default:
 				/* unexpected result, let's give up and let other tasks run */
-				return;
+				goto stop;
 			}
 		}
 
@@ -414,6 +414,8 @@
 	} /* end of while (max_accept--) */
 
 	/* we've exhausted max_accept, so there is no need to poll again */
+ stop:
+	fd_done_recv(fd);
 	return;
 }
 
diff --git a/src/raw_sock.c b/src/raw_sock.c
index a67a8d9..fda7de1 100644
--- a/src/raw_sock.c
+++ b/src/raw_sock.c
@@ -176,6 +176,7 @@
 			 * being asked to poll.
 			 */
 			conn->flags |= CO_FL_WAIT_ROOM;
+			fd_done_recv(conn->t.sock.fd);
 			break;
 		}
 	} /* while */
@@ -299,6 +300,8 @@
 				 */
 				if (fdtab[conn->t.sock.fd].ev & FD_POLL_HUP)
 					goto read0;
+
+				fd_done_recv(conn->t.sock.fd);
 				break;
 			}
 			count -= ret;