Makefile now automatically detects modifications of compilation flags,
and produces object files in directories dedicated to these compilation flags.
This makes it possible, for example, to compile libzstd with different DEBUGLEVEL values.
Object files sharing the same configuration are generated into their own dedicated directory.
Also : new compilation variables
- DEBUGLEVEL : select the debug level (assert & traces) inserted during compilation (default == 0 == release)
- HASH : select a hash function to differentiate configurations (default == md5sum)
- BUILD_DIR : skip the hash stage and store object files into a manually specified directory
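For illustration, here is a minimal sketch of the kind of compile-time gating
that DEBUGLEVEL selects (the macro names below are illustrative, not libzstd's
actual definitions); building with e.g. `make DEBUGLEVEL=2` now produces these
objects in their own directory, separate from the release objects.

```c
#include <assert.h>
#include <stdio.h>

#ifndef DEBUGLEVEL
#  define DEBUGLEVEL 0                  /* default == 0 == release */
#endif

#if DEBUGLEVEL >= 1                     /* level 1+ : keep asserts */
#  define DBG_ASSERT(c) assert(c)
#else
#  define DBG_ASSERT(c) ((void)0)
#endif

#if DEBUGLEVEL >= 2                     /* level 2+ : emit traces */
#  define DBG_TRACE(...) fprintf(stderr, __VA_ARGS__)
#else
#  define DBG_TRACE(...) ((void)0)
#endif
```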
%.o object files generated for the dynamic library
must be different from those generated for the static library.
Due to this difference, %.o files were so far only generated for the static library,
and the dynamic library was rebuilt from the %.c sources.
This meant that, for every minor change, the entire dynamic library had to be rebuilt.
This is fixed in this PR :
only the modified %.c sources get rebuilt.
There are compilation environments on aarch64 where NEON isn't
available. While these environments could define ZSTD_NO_INTRINSICS,
it's more fail-safe to use the more specific symbol to know if NEON
extensions are available.
__ARM_NEON is the proper symbol, defined in ARM C Language Extensions
Release 2.1 (https://developer.arm.com/documentation/ihi0053/d/). Some
sources suggest __ARM_NEON__, but that's the obsolete spelling from
prior versions of the standard.
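For example, NEON-specific code can be guarded on the standard macro (a sketch,
not the actual zstd source):

```c
#if defined(__ARM_NEON)          /* ACLE-specified macro for NEON availability */
#  include <arm_neon.h>
static void copy16(void* dst, const void* src)
{
    vst1q_u8((uint8_t*)dst, vld1q_u8((const uint8_t*)src));
}
#else                            /* scalar fallback when NEON is unavailable */
#  include <string.h>
static void copy16(void* dst, const void* src)
{
    memcpy(dst, src, 16);
}
#endif
```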
Signed-off-by: Warner Losh <imp@bsdimp.com>
The problem occurs in this scenario:
1. We find a synchronization point.
2. We attempt to create the job.
3. We fail because the job table is full: `mtctx->nextJobID > mtctx->doneJobID + mtctx->jobIDMask`.
4. We call `ZSTDMT_compressStream_generic` again.
5. We forget that we're at a sync point already, and we continue looking
for the next sync point.
The fix is to detect whether we're currently paused at a sync point, and if
we are, to not load any more input.
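A sketch of the idea (the field and function names are illustrative, not the
actual zstdmt internals):

```c
/* Track whether input loading already stopped at a synchronization point,
 * so that a retry after a full job table doesn't scan for the next one. */
typedef struct {
    int pausedAtSyncPoint;   /* set when a sync point was found but the job table was full */
} SyncState;

static int shouldLoadMoreInput(const SyncState* s)
{
    return !s->pausedAtSyncPoint;   /* already paused at a sync point: load nothing */
}
```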
Caught by zstreamtest. I modified it to make the bug occur more often
(~1/100K -> ~1/200) and verified that it is fixed afterwards. I then ran a
few hundred thousand unmodified zstreamtest iterations to verify.
When zstdmt cannot get a buffer and `ZSTD_e_end` is passed, an empty
compression job can be created. Additionally, `mtctx->frameEnded` can be
set to 1, which could potentially cause problems like unterminated blocks.
The fix is to adjust the end directive to `ZSTD_e_flush` even when we can't get a buffer.
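A sketch of the adjustment (the helper and its flag are illustrative, not the
actual zstdmt code):

```c
#include <zstd.h>

/* When no buffer can be acquired, degrade ZSTD_e_end to ZSTD_e_flush:
 * no empty last job gets created and the frame isn't marked ended
 * prematurely; the caller simply calls again once a buffer is available. */
static ZSTD_EndDirective adjustEndDirective(ZSTD_EndDirective endOp, int bufferAvailable)
{
    if ((endOp == ZSTD_e_end) && !bufferAvailable)
        return ZSTD_e_flush;
    return endOp;
}
```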
This commit leaves only the functions used by zstd_compress.c. All other
functions have been removed from the API. The ZSTDMT unit tests in
fuzzer.c and zstreamtest.c have been rewritten to use the ZSTD API. And
the --mt zstreamtest tests have been ripped out.
Simplifies the code and removes blocking from zstdmt.
At this point we could completely delete
`ZSTDMT_compress_advanced_internal()`. However, I'm leaving it in for now,
because I think that deletion should wait for the zstd-1.5.0 release, in case
anyone is still using the ZSTDMT API, even though it is not installed by default.
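For reference, multithreaded compression remains available through the public
API, which is the supported path instead of calling ZSTDMT_* directly (sketch;
requires a libzstd built with multithreading support):

```c
#include <zstd.h>

static size_t compressMT(void* dst, size_t dstCapacity,
                         const void* src, size_t srcSize)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 4);         /* 4 worker threads */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 5);
    {   size_t const cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
        ZSTD_freeCCtx(cctx);
        return cSize;                                          /* check with ZSTD_isError() */
    }
}
```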
Fixes #2327.
Pass in the `ZSTD_cParamMode_e` to select how we define our cparams.
Based on the mode, we either take the `dictSize` into account or we set
it to `0`. See the documentation for `ZSTD_cParamMode_e`.
Some of the modes currently share the same behavior, but they are kept
distinct because they represent drastically different cases, e.g.
compressing while reprocessing the dictionary vs. creating a cdict.
Additionally, when downsizing, the hashLog and chainLog take the
(adjusted) dictionary size into account, since the size of the
dictionary gets added onto the window size.
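A sketch of the downsizing bound this implies (the helper is illustrative, not
the library's actual code): the hash and chain tables only need to cover what
will actually be loaded, i.e. the window plus the dictionary.

```c
#include <stdint.h>

/* Smallest power-of-2 exponent covering the data actually loaded:
 * the window plus the (adjusted) dictionary size. */
static unsigned neededTableLog(uint64_t windowSize, uint64_t dictSize)
{
    uint64_t const loadedSize = windowSize + dictSize;
    unsigned tableLog = 1;
    while ((tableLog < 63) && (((uint64_t)1 << tableLog) < loadedSize))
        tableLog++;
    return tableLog;
}

/* e.g. hashLog  = MIN(hashLog,  neededTableLog(windowSize, dictSize));
 *      chainLog = MIN(chainLog, neededTableLog(windowSize, dictSize)); */
```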
Adds a simple test to ensure that we aren't downsizing too far.
The DDS (dedicated dictionary search) structure can't be copied into the
working tables the way the DMS (dictionary match state) can. So it doesn't
need to account for the source size when sizing its parameters, just the
dictionary size.
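A sketch of the resulting sizing input (names are illustrative):

```c
#include <stdint.h>

/* The data that parameter sizing must cover differs by dictionary mode:
 * a DDS is queried in place and never copied into the working tables,
 * so only the dictionary size matters; when dictionary contents can be
 * copied in (as with the DMS), the source size counts as well. */
static uint64_t sizingBasis(uint64_t srcSize, uint64_t dictSize, int usesDedicatedDictSearch)
{
    return usesDedicatedDictSearch ? dictSize : (srcSize + dictSize);
}
```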