IMPORT: slz: use the generic function for the last bytes of the crc32

This is the only place where we conditionally use the crc32_fast table,
so it is better to call the crc32_char inline function for this. This
should also reduce the L1 cache footprint of the compression by ~1kB when
dealing with small blocks, and it shows a consistent 0.5% perf improvement.

This is slz upstream commit 075351b6c2513b548bac37d6582e46855bc7b36f.
diff --git a/src/slz.c b/src/slz.c
index a9104ea..76e89e2 100644
--- a/src/slz.c
+++ b/src/slz.c
@@ -969,7 +969,7 @@
 	}
 
 	while (buf < end)
-		crc = crc32_fast[0][(crc ^ *buf++) & 0xff] ^ (crc >> 8);
+		crc = crc32_char(crc, *buf++);
 	return crc;
 }