* A copy-paste error meant we weren't running the advanced/cdict
streaming tests with the old API.
* Clean up the old streaming tests to skip incompatible configs.
* Update `results.csv`.
The tests now catch the bug in #1787.
* Move all ZSTDMT parameter-setting code to `ZSTD_CCtxParams_*Parameter()`.
ZSTDMT now calls these functions, so we can keep all the logic in the
same place.
* Clean up `ZSTD_CCtx_setParameter()` to only add extra checks where needed.
* Clean up `ZSTDMT_initJobCCtxParams()` by copying all parameters by default,
and then zeroing the ones that need to be zeroed. We had missed adding several
parameters here, and it makes more sense to have to update this function only
when something in ZSTDMT changes.
* Add `ZSTDMT_cParam_clampBounds()` to clamp a parameter into its valid
range. Use it when setting ZSTDMT parameters to keep the backwards-compatible
behavior of clamping them into the valid range.
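As a rough illustration of the clamping idea (not the library's internal implementation), the same effect can be sketched with the public `ZSTD_cParam_getBounds()`:

```c
/* Illustrative sketch only: clamp a compression parameter into the range
 * reported by the public bounds API. The real ZSTDMT_cParam_clampBounds()
 * is internal and may differ in detail. */
#include <zstd.h>

static int clampParam(ZSTD_cParameter param, int value)
{
    ZSTD_bounds const bounds = ZSTD_cParam_getBounds(param);
    if (ZSTD_isError(bounds.error)) return value;   /* unknown parameter: leave unchanged */
    if (value < bounds.lowerBound) return bounds.lowerBound;
    if (value > bounds.upperBound) return bounds.upperBound;
    return value;
}
```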
Test a positive compression level with uncompressed literals,
and a negative compression level with compressed literals.
I double-checked the `results.csv` and made sure that the compressed
sizes make sense.
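For reference, forcing the literal mode in a config boils down to setting the experimental literal-compression-mode parameter alongside the level; the levels below are only illustrative, not the exact values the tests use:

```c
/* Sketch, assuming the experimental API (ZSTD_STATIC_LINKING_ONLY):
 * a positive level with uncompressed literals, and a negative level
 * with (Huffman-)compressed literals. Levels chosen arbitrarily. */
#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>

static void setUncompressedLiteralsConfig(ZSTD_CCtx* cctx)
{
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 3);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_literalCompressionMode, ZSTD_lcm_uncompressed);
}

static void setCompressedLiteralsConfig(ZSTD_CCtx* cctx)
{
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, -1);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_literalCompressionMode, ZSTD_lcm_huffman);
}
```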
* Add configs that test multithreading, LDM, and setting explicit
parameters (see the sketch after this list).
* Update the `compress cctx` method to accept `ZSTD_parameters`.
* Compile against the multithreaded `libzstd.a`.
* Update `results.csv` for the new configs.
We now have a fairly wide set of configs/methods, so unless you think
there are more I should test, I'll pause adding more for now.
Dictionaries are prebuilt and saved as part of the data object.
The config decides whether or not to use the dictionary if it is
available. Configs that require dictionaries are only run with
data that have dictionaries. The method will skip configs that are
irrelevant; for example, `ZSTD_compress()` will skip configs with
dictionaries.
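A minimal sketch of that skip/dictionary logic, with hypothetical argument names (the real harness types are shaped differently):

```c
/* Sketch only: compress one sample, honoring the config's use_dictionary
 * flag and skipping when the data has no prebuilt dictionary.
 * Returns 0 as a stand-in for "skipped". */
#include <stddef.h>
#include <zstd.h>

static size_t compressOne(ZSTD_CCtx* cctx, void* dst, size_t dstCapacity,
                          void const* src, size_t srcSize,
                          void const* dict, size_t dictSize,
                          int useDictionary, int level)
{
    if (useDictionary && dict == NULL)
        return 0;  /* config requires a dictionary the data doesn't provide: skip */
    return ZSTD_compress_usingDict(cctx, dst, dstCapacity, src, srcSize,
                                   useDictionary ? dict : NULL,
                                   useDictionary ? dictSize : 0, level);
}
```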
I've also trimmed the Silesia source to 1 MB per file (12 MB total),
and added 500 samples from the GitHub data set with a dictionary.
I've intentionally added an extra line to the `results.csv` to make
the nightly build fail, so that we can see how CircleCI reports it.
Full list of changes:
* Add pre-built dictionaries to the data.
* Add `use_dictionary` and `no_pledged_src_size` flags to the config.
* Add a config using a dictionary for every level.
* Add a config that specifies no pledged source size.
* Support dictionaries and streaming in the `zstdcli` method.
* Add a context-reuse method using `ZSTD_compressCCtx()` (see the sketch after this list).
* Clean up the formatting of the `results.csv` file to align columns.
* Add `--data`, `--config`, and `--method` flags to constrain each
to a particular value. This is useful for debugging a failure
or debugging a particular config/method/data.
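The context-reuse method mentioned above amounts to allocating one `ZSTD_CCtx` and reusing it for every sample, roughly:

```c
/* Sketch of context reuse: one ZSTD_CCtx shared across all inputs,
 * instead of creating and freeing a context per call. */
#include <stdio.h>
#include <zstd.h>

static void compressAll(void const* const* srcs, size_t const* srcSizes, size_t nbSrcs,
                        void* dst, size_t dstCapacity, int level)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    if (cctx == NULL) return;
    for (size_t i = 0; i < nbSrcs; ++i) {
        size_t const csize = ZSTD_compressCCtx(cctx, dst, dstCapacity,
                                               srcs[i], srcSizes[i], level);
        if (ZSTD_isError(csize))
            fprintf(stderr, "sample %zu: %s\n", i, ZSTD_getErrorName(csize));
        /* A real method would accumulate csize into the reported total. */
    }
    ZSTD_freeCCtx(cctx);
}
```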
The regression tests run nightly or on the `regression`
branch for convenience. The results are uploaded as
artifacts of the job. If they change, check the diff
printed in the job. If all is well, download the new
results and commit them to the repo.
This code will only run on a UNIX-like platform. It
could be made to run on Windows, but I don't think that
is necessary. It also uses C99.
* data: This module defines the data to run tests on.
It downloads data from a URL into a cache directory,
checks it against a checksum, and unpacks it. It also
provides helpers for accessing the data.
* config: This module defines the configs to run tests
with. A config is a set of API parameters and a set of
CLI flags.
* result: This module is a helper for the method module; it defines
the result type.
* method: This module defines the compression methods
to test. It is what runs the regression test using the
data and the config. It reports the total compressed
size, or an error/skip.
* test: This is the test binary that runs the tests for
every (data, config, method) tuple, and prints the
results to the output file and stderr.
* results.csv: The results that the current commit is
expected to produce.
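To make the module boundaries concrete, here is a hypothetical sketch of how a method and its result might be declared; the names and fields are illustrative, not the harness's actual types:

```c
/* Hypothetical interfaces only; the real data/config/method/result types
 * in the harness may be shaped differently. */
#include <stddef.h>

typedef enum {
    RESULT_OK,      /* total_compressed_size is valid and goes into results.csv */
    RESULT_SKIP,    /* this (data, config) pair doesn't apply to the method */
    RESULT_ERROR    /* compression failed */
} result_status_t;

typedef struct {
    result_status_t status;
    size_t total_compressed_size;
} result_t;

typedef struct method method_t;
struct method {
    char const* name;
    /* Runs the regression test for one (data, config) pair. */
    result_t (*compress)(method_t const* method,
                         void const* data, void const* config);
};
```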