ZSTD_compress() and friends would treat an empty input as an unknown size
when selecting parameters. Thus, they would drastically overallocate the
context. Tell ZSTD_getParams() that the source size is 1 when it is empty.
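A minimal sketch of the idea, assuming the experimental ZSTD_getParams() helper (paramsForInput is an illustrative name):

    #define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_getParams() is in the experimental section */
    #include <zstd.h>

    /* Sketch: select parameters for a possibly-empty input.
     * srcSize==0 would otherwise be treated as "unknown size"
     * and select needlessly large parameters. */
    static ZSTD_parameters paramsForInput(int level, size_t srcSize, size_t dictSize)
    {
        unsigned long long const hintedSize = (srcSize == 0) ? 1 : srcSize;
        return ZSTD_getParams(level, hintedSize, dictSize);
    }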
It was invoking ZSTD_initCStream_advanced() with pledgedSrcSize==0 and contentSizeFlag=1,
which means "empty",
while the intention was to mean "unknown".
contentSizeFlag==1 is new; it is a consequence of setting this flag to 1 by default.
The solution selected here is to pass ZSTD_CONTENTSIZE_UNKNOWN to mean "unknown".
So contentSizeFlag remains set (it wasn't in previous versions).
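A minimal sketch of the fixed call, assuming the experimental streaming API (initForUnknownSize is an illustrative wrapper):

    #define ZSTD_STATIC_LINKING_ONLY   /* advanced/experimental API */
    #include <zstd.h>

    /* Sketch: initialize a CStream when the source size is genuinely unknown.
     * Passing ZSTD_CONTENTSIZE_UNKNOWN (not 0) avoids declaring the frame "empty". */
    static size_t initForUnknownSize(ZSTD_CStream* zcs, ZSTD_parameters params)
    {
        return ZSTD_initCStream_advanced(zcs, NULL, 0, params, ZSTD_CONTENTSIZE_UNKNOWN);
    }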
There were multiple reasons stacked:
- Visual Studio uses a different code path, because ZSTD_NEWAPI is not defined
- fileio.c sends `0` as `pledgedSrcSize` to mean `ZSTD_CONTENTSIZE_UNKNOWN` (fixed)
- ZSTDMT_resetCCtx() interpreted `0` as "empty" instead of "unknown" (fixed)
It isn't useful in any case to repeat the default tables.
Saves a few bytes on Silesia, since we don't trigger the dictionary
heuristic.
Before: 211988480 => 73651998 bytes
After: 211988480 => 73651721 bytes
The source size is now taken into account when determining compression parameters,
when compressing one file only.
For multiple files, it still "bets" that files are going to be small.
There was also a bug recently introduced in ZSTD_CCtx_loadDictionary_advanced(),
making it unable to use pledgedSrcSize to determine compression parameters.
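A hedged sketch of the intended interaction under the new API (setupCCtx is an illustrative wrapper, assuming ZSTD_CCtx_setPledgedSrcSize() and ZSTD_CCtx_loadDictionary() from the experimental section):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Sketch: pledge the source size before loading the dictionary,
     * so compression parameters can adapt to the real input size. */
    static size_t setupCCtx(ZSTD_CCtx* cctx, unsigned long long srcSize,
                            const void* dict, size_t dictSize)
    {
        size_t const err = ZSTD_CCtx_setPledgedSrcSize(cctx, srcSize);
        if (ZSTD_isError(err)) return err;
        return ZSTD_CCtx_loadDictionary(cctx, dict, dictSize);
    }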
to mean "pledgedSrcSize is not known at init time" instead of `0`.
Note that a few prototypes created and documented with `0` meaning "unknown" still interpret `0` as "unknown",
to avoid breaking 3rd-party applications which depend on this behavior.
But this value is no longer recommended to mean "unknown".
In some future version, it might be possible to switch "0" to mean "empty",
as is already the case for several prototypes.
The advantage is that the pledgedSrcSize field would have the same behavior across the entire API,
making it easier to reason about.
Note that all concerned prototypes belong to the "experimental" API section.
srcSize is checked at the end of compression,
so if someone uses "0" to mean "unknown" while it effectively means "empty",
this is immediately caught by the compression function, which generates the error code ZSTD_error_srcSize_wrong.
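For illustration, a hedged sketch of how the error surfaces to a caller (pledgedSizeMismatch is an illustrative name; error inspection via zstd_errors.h):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>
    #include <zstd_errors.h>

    /* Sketch: if the pledged size says "empty" (0) but data is sent,
     * finalizing the frame reports ZSTD_error_srcSize_wrong. */
    static int pledgedSizeMismatch(ZSTD_CCtx* cctx,
                                   ZSTD_outBuffer* out, ZSTD_inBuffer* in)
    {
        size_t const ret = ZSTD_compress_generic(cctx, out, in, ZSTD_e_end);
        return ZSTD_isError(ret)
            && (ZSTD_getErrorCode(ret) == ZSTD_error_srcSize_wrong);
    }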
In `ZSTD_compressBegin_advanced()`, `ZSTD_parameters` are used to set the
compression parameters, but the level didn't get set to `CLEVEL_CUSTOM`, so
`ZSTD_compressBlock()` used the wrong parameters when checking the source
size.
ZSTD_compressBound() works fine, but is only useful for dynamic allocation.
For static allocation, only a macro can provide the amount at compile time.
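A minimal usage sketch of such a macro, ZSTD_COMPRESSBOUND(), as exposed in zstd.h:

    #define ZSTD_STATIC_LINKING_ONLY   /* in case the macro sits in the static section */
    #include <zstd.h>

    #define SRC_SIZE 1024
    /* A macro can size a static array at compile time,
     * which the ZSTD_compressBound() function cannot do. */
    static char compressedBuffer[ZSTD_COMPRESSBOUND(SRC_SIZE)];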
It's not good to mix the old and new APIs.
ZSTD_resetCStream() doesn't just set pledgedSrcSize:
it also sets up the CCtx for single-threaded compression.
The problem is, when 2+ threads are defined in cctx->requestedParams,
ZSTD_compress_generic() will want to start MT compression,
since initialization is supposed to have already happened (thanks to ZSTD_resetCStream()),
except that the underlying ZSTDMT_CCtx* object was never created,
resulting in a segfault.
This is an invalid construction
(the correct one is to use ZSTD_CCtx_setPledgedSrcSize(); see the sketch below).
I haven't found a nice way to mitigate the impact if someone makes the same mistake.
At some point, removing the old API to keep only the new API within fileio.c will limit these risks.
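A sketch of the correct, new-API-only construction (compressNewApi is an illustrative wrapper):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Sketch: stay within the new API. Set the pledged size on the cctx,
     * then let ZSTD_compress_generic() perform (possibly MT) initialization. */
    static size_t compressNewApi(ZSTD_CCtx* cctx,
                                 ZSTD_outBuffer* out, ZSTD_inBuffer* in,
                                 unsigned long long pledgedSrcSize)
    {
        size_t const err = ZSTD_CCtx_setPledgedSrcSize(cctx, pledgedSrcSize);
        if (ZSTD_isError(err)) return err;
        /* do NOT call ZSTD_resetCStream() here: it would initialize
         * the context for single-threaded compression only */
        return ZSTD_compress_generic(cctx, out, in, ZSTD_e_end);
    }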
srcSize is read and provided for each file, not at resource creation.
Providing it at resource creation used to be useful with the older API, because it could not re-adapt parameters between sessions.
At some point, it will be better to remove the old code, and only keep the new API.
It works fine for now.
In some complex scenarios,
a buffer would be freed because it was too large;
another buffer would then be allocated, but fail,
triggering an error,
and the general buffer pool would then be freed,
where the entry for the already-freed buffer would be found
(beyond the total index, but still) and freed again, resulting in a double-free error.
The adjustment would previously exit when srcSize is unknown.
But in the case of custom parameters,
hashLog and chainLog can still be too large in comparison with windowLog.
Reduces maximum memory allocated during zstreamtest --newapi
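A hedged sketch of the kind of adjustment this implies (the bounds shown are illustrative, not the library's precise logic):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Sketch: even when srcSize is unknown, keep table logs
     * consistent with windowLog (illustrative bounds). */
    static void clampCParams(ZSTD_compressionParameters* cParams)
    {
        if (cParams->hashLog  > cParams->windowLog + 1)
            cParams->hashLog  = cParams->windowLog + 1;
        if (cParams->chainLog > cParams->windowLog + 1)
            cParams->chainLog = cParams->windowLog + 1;
    }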
It does not feel "right" from a dependency perspective.
ZSTD_initDCtx_internal() is triggered once, on DCtx creation,
while ZSTD_decompressBegin() is invoked at the beginning of each new frame,
and is also a user-facing prototype.
Downside: a DCtx must be initialized before first usage!
This was always the intention by the way, and is documented as such.
This stage is automatically done within ZSTD_decompress() and variants,
and also within ZSTD_decompressStream().
Only ZSTD_decompressContinue() is impacted;
it must be preceded by ZSTD_decompressBegin(), as detailed in the documentation (see the sketch below).
A test has been fixed, to no longer rely on the undocumented assumption that ZSTD_decompressBegin() is invoked during init.
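A minimal sketch of the documented sequence (startFrame is an illustrative name):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Sketch: ZSTD_decompressContinue() must be preceded by an explicit
     * ZSTD_decompressBegin(), once per frame. */
    static size_t startFrame(ZSTD_DCtx* dctx)
    {
        size_t const err = ZSTD_decompressBegin(dctx);
        if (ZSTD_isError(err)) return err;
        /* ... then loop on ZSTD_nextSrcSizeToDecompress()
         *     + ZSTD_decompressContinue() ... */
        return 0;
    }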
The decoder output buffer would receive a wrong size.
In previous versions, ZSTD_decompressStream() would blindly trust the caller that pos <= size.
In this version, this condition is actively checked,
and the function returns an error code if it is not respected.
This check could also be done with an assert(),
but since this is a user-facing interface, it seems better to keep the check at runtime.
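For illustration, a hedged sketch of the now-rejected misuse (demoPosCheck is an illustrative name):

    #include <zstd.h>
    #include <assert.h>

    /* Sketch: a deliberately inconsistent output buffer (pos > size).
     * Previous versions trusted it blindly; this version returns an error code. */
    static void demoPosCheck(ZSTD_DStream* ds, ZSTD_inBuffer* in,
                             void* dst, size_t dstCapacity)
    {
        ZSTD_outBuffer out = { dst, dstCapacity, dstCapacity + 1 };  /* pos > size */
        size_t const ret = ZSTD_decompressStream(ds, &out, in);
        assert(ZSTD_isError(ret));   /* rejected at runtime */
    }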
* Maximum window size in 32-bit mode is 1GB, since allocations for 2GB fail
on my Mac.
* Maximum window size in 64-bit mode is 2GB, since that is the largest
power of 2 that works with the overflow prevention.
* Allow `--long=windowLog` to set the window log, along with
`--zstd=wlog=#`. These options also set the window size during
decompression, but don't override `--memory=#` if it is set.
* Present a helpful error message when the window size is too large during
decompression.
* The long range matcher defaults to a hash log 7 less than the window log,
which keeps it at 20 for window log 27.
* Keep the default long range matcher window size and the default maximum
window size at 27 for the API and CLI.
* Add tests that use the maximum window size and hash size for compression
and decompression.
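Possible CLI invocations exercising these options (file names hypothetical):

    # compress with the long range matcher and a 30-bit window
    zstd --long=30 file

    # decompression must be allowed the same window size
    zstd -d --long=30 file.zst
    # or raise the memory limit instead
    zstd -d --memory=1024MB file.zst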
resulting in an undefined symbol error.
Push the requirement to GCC 4 for now.
Another solution, proposed by @NWilson, is to use __LONG_MAX__ instead.
__LONG_MAX__ is a GCC-specific constant, whose value is supposed to depend on the underlying target hardware (32/64 bits).
It might be better, but it also seems more complex, hence more prone to side effects.
Keeping the simple solution for now (just rely on __GNUC__).
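A sketch of the simple gate (the guarded code path is elided):

    /* Sketch: rely on __GNUC__ rather than probing __LONG_MAX__ */
    #if defined(__GNUC__) && (__GNUC__ >= 4)
    /* ... code path requiring GCC 4+ ... */
    #else
    /* ... fallback ... */
    #endif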
A supporting function for the bufferless streaming API (ZSTD_decompressContinue()):
it makes it possible to correctly size a round buffer for decoding with this API.
Also: added the field blockSizeMax within ZSTD_frameHeader,
as it's necessary information to know when to restart at the beginning of the decoding buffer.
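A hedged sketch of sizing such a round buffer (the windowSize + blockSizeMax bound reflects the intent described above; verify it against the library documentation):

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Sketch: a round buffer must hold the full window plus one block,
     * so that decoding can safely wrap to the buffer's beginning. */
    static size_t roundBufferSize(const void* src, size_t srcSize)
    {
        ZSTD_frameHeader fhdr;
        size_t const ret = ZSTD_getFrameHeader(&fhdr, src, srcSize);
        if (ret != 0) return 0;   /* error, or more source bytes needed */
        return (size_t)fhdr.windowSize + fhdr.blockSizeMax;
    }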