MEDIUM: compression: postpone buffer adjustments after compression

Until now we used to copy the pending outgoing data into the new buffer,
then compute the chunk size, then compress, then fix up the chunk size,
then copy the remaining data into the destination buffer. If the
compression failed for whatever reason (e.g. not enough input bytes to
push an extra block), all this work was still performed for no added
value. It also had the disadvantage of requiring a fixed length to
encode the chunk size.
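
For illustration only, here is a minimal sketch of that old ordering,
using made-up names (sbuf, fake_compress) rather than HAProxy's real
structures, and a fixed 8-digit hex chunk header; the final copy of the
remaining data is omitted for brevity:

/* Minimal sketch of the OLD ordering, with made-up names (sbuf,
 * fake_compress): copy, reserve a fixed-width chunk header, compress,
 * then patch the header. If the compressor emits nothing, the copy and
 * the header patch were wasted work.
 */
#include <stdio.h>
#include <string.h>

#define CHUNK_HDR_LEN 10  /* "XXXXXXXX\r\n": fixed-length hex chunk size */

struct sbuf { char data[16384]; size_t len; };

/* stand-in compressor: needs at least 64 input bytes to emit a block */
static size_t fake_compress(const char *in, size_t ilen, char *out)
{
        if (ilen < 64)
                return 0;
        memcpy(out, in, ilen);            /* identity "compression" */
        return ilen;
}

static void old_flow(struct sbuf *out, const char *pend, size_t plen,
                     const char *in, size_t ilen)
{
        char tmp[CHUNK_HDR_LEN + 1];
        size_t clen;

        memcpy(out->data, pend, plen);            /* 1. copy pending data  */
        out->len = plen + CHUNK_HDR_LEN;          /* 2. reserve the header */
        clen = fake_compress(in, ilen, out->data + out->len);
        out->len += clen;                         /* 3. compress           */
        snprintf(tmp, sizeof(tmp), "%08zx\r\n", clen);
        memcpy(out->data + plen, tmp, CHUNK_HDR_LEN); /* 4. fix chunk size */
}

int main(void)
{
        struct sbuf out = { .len = 0 };

        /* too little input: no block is emitted, yet steps 1, 2 and 4
         * were still performed for nothing
         */
        old_flow(&out, "PENDING", 7, "short", 5);
        printf("out.len=%zu despite an empty block\n", out.len);
        return 0;
}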

Thanks to the body parser changes that went into 1.5 late in the cycle,
the buffers are no longer modified during these operations. This patch
therefore rearranges the operations into a more efficient order:

1) init() prepares a new buffer and reserves space in it for the pending
   outgoing data (no copy) and for the chunk size
2) the data are compressed
3) only if data were added to the buffer are the old data copied and the
   chunk size set (see the sketch below)
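
Below is a minimal sketch of this reworked ordering, again with made-up
names rather than HAProxy's real API: space is merely reserved up front,
and the copy of the old data plus the chunk-size write only happen once
the compressor has actually produced output:

/* Minimal sketch of the NEW ordering (made-up names again): space is
 * reserved only, the data are compressed, and the copy of the old data
 * plus the chunk-size write happen only if output was produced.
 */
#include <stdio.h>
#include <string.h>

#define CHUNK_HDR_LEN 10  /* "XXXXXXXX\r\n": still fixed-width here */

struct sbuf { char data[16384]; size_t len; };

/* stand-in compressor: needs at least 64 input bytes to emit a block */
static size_t fake_compress(const char *in, size_t ilen, char *out)
{
        if (ilen < 64)
                return 0;
        memcpy(out, in, ilen);
        return ilen;
}

static void new_flow(struct sbuf *out, const char *pend, size_t plen,
                     const char *in, size_t ilen)
{
        size_t reserved = plen + CHUNK_HDR_LEN;   /* 1. reserve, no copy  */
        size_t clen = fake_compress(in, ilen, out->data + reserved); /* 2. */
        char tmp[CHUNK_HDR_LEN + 1];

        if (!clen)                /* nothing produced: no copy, no header */
                return;

        memcpy(out->data, pend, plen);            /* 3. copy old data now */
        snprintf(tmp, sizeof(tmp), "%08zx\r\n", clen);
        memcpy(out->data + plen, tmp, CHUNK_HDR_LEN); /* ...and set size  */
        out->len = reserved + clen;
}

int main(void)
{
        struct sbuf out = { .len = 0 };
        char big[128];

        memset(big, 'A', sizeof(big));

        new_flow(&out, "short", 5, "tiny", 4);
        printf("empty block  : out.len=%zu (nothing was copied)\n", out.len);

        new_flow(&out, "PENDING", 7, big, sizeof(big));
        printf("emitted block: out.len=%zu\n", out.len);
        return 0;
}

The sketch keeps the fixed-width chunk header on purpose; getting rid of
it is the second of the further optimisations listed below.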

A few further optimisations are still possible:

  - decide whether we prefer to copy pending outgoing data from the
    old buffer to the new one, or pending incoming compressed data
    from the new one to the old one, based on the amount of outgoing
    data available. Given that pending outgoing data are rare and the
    operation could be complex in the presence of extra input data,
    it's probably better to ignore this one;

  - compute the needed length for the chunk size. This would avoid
    sending lots of unnecessary leading zeroes (a minimal sketch of this
    follows below).
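
For that second point, here is a minimal sketch of how the needed
chunk-size length could be computed; the helper is purely illustrative
and is not part of this patch:

/* Count how many hex digits are actually needed to encode a chunk size,
 * so that e.g. "80\r\n" can be sent instead of "00000080\r\n". The
 * helper name is made up; this is not code from the patch.
 */
#include <stdio.h>
#include <stddef.h>

static int hex_digits(size_t len)
{
        int digits = 1;

        while (len >= 16) {
                len >>= 4;
                digits++;
        }
        return digits;
}

int main(void)
{
        size_t sizes[] = { 0, 9, 128, 4096, 65535 };
        size_t i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%6zu bytes -> %d hex digit(s) (\"%zx\")\n",
                       sizes[i], hex_digits(sizes[i]), sizes[i]);
        return 0;
}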