`short-tests-0` was silently failing, most likely because of the `&& make clean` construction. Switch to `;` instead.
Also fix all the test failures that this exposed.
`make all` is failing on CircleCI because Docker is missing there. Move that test
to GitHub Actions, and switch the pedantic CircleCI test to `make allmost`.
51 MB seems excessive for CI storage, especially considering the nature of the test.
(Note: maybe we should consider using `/tmp` for files generated during tests,
as tmpfs typically resides in RAM, thus preserving storage.)
Test that `libzstd.so` doesn't have the exec-stack bit set, using
`readelf`. If the stack is marked executable, systemd will refuse
to link against zstd. We now check that the bit isn't set on every PR.
Adds a test for PR #2857
Fixes Issue #2865
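For illustration, here is a minimal standalone C equivalent of the check (the actual test shells out to `readelf`; this sketch assumes a 64-bit little-endian ELF whose `e_phentsize` equals `sizeof(Elf64_Phdr)`):

```c
#include <elf.h>
#include <stdio.h>

/* Scan the program headers of a shared object and fail if the
 * PT_GNU_STACK segment requests an executable stack. */
int main(int argc, char** argv)
{
    const char* const path = (argc > 1) ? argv[1] : "libzstd.so";
    FILE* const f = fopen(path, "rb");
    Elf64_Ehdr ehdr;
    unsigned i;
    if (f == NULL || fread(&ehdr, sizeof(ehdr), 1, f) != 1) return 2;
    for (i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        if (fseek(f, (long)(ehdr.e_phoff + i * sizeof(phdr)), SEEK_SET) != 0
            || fread(&phdr, sizeof(phdr), 1, f) != 1) return 2;
        if (phdr.p_type == PT_GNU_STACK && (phdr.p_flags & PF_X)) {
            fprintf(stderr, "%s: exec-stack bit is set\n", path);
            return 1;   /* systemd would refuse to link against this build */
        }
    }
    return 0;
}
```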
Allow the `dictContentSize` to be any size. The finalized dictionary
content size must be at least as large as the maximum repcode (8). So we
add zero bytes to the dictionary to ensure that we meet that
requirement.
I've removed this restriction because it's been causing us headaches when
people complain that dictionary training failed. It fails because there
isn't enough useful content to put in the dictionary, either because
every sample is identical and shorter than ZDICT_CONTENTSIZE_MIN bytes,
or because there isn't enough content overall. Instead, we should succeed
in creating the dictionary, and leave it to the user to decide whether it
is worthwhile. It is possible that the tables alone provide enough value.
NOTE: This allows us to produce dictionaries with finalized
`dictContentSize < ZDICT_CONTENTSIZE_MIN`. But, they are still valid
zstd dictionaries. We could remove the `ZDICT_CONTENTSIZE_MIN` macro,
but I've decided to leave that for now, so we don't break users.
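A minimal sketch of the padding rule; the names and buffer handling here are illustrative, not the actual ZDICT internals:

```c
#include <string.h>

#define MAX_REPCODE 8   /* assumption for this sketch: the largest repcode */

/* If the trained content is shorter than the maximum repcode, shift it to
 * the end and zero-fill the front, so the finalized dictionary always
 * carries at least MAX_REPCODE content bytes. `content` must have room
 * for at least MAX_REPCODE bytes. Returns the new content size. */
static size_t padDictContent(unsigned char* content, size_t contentSize)
{
    if (contentSize >= MAX_REPCODE) return contentSize;
    memmove(content + (MAX_REPCODE - contentSize), content, contentSize);
    memset(content, 0, MAX_REPCODE - contentSize);
    return MAX_REPCODE;
}
```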
Fix underflow of `nbCompares` by switching to an `int` and comparing
`nbCompares > 0`. This is a minimal fix, because I don't want to change
the logic. These loops seem to be doing `nbCompares + 1` comparisons.
The bug was reported by Dan Carpenter and found by the Smatch static
checker.
https://lore.kernel.org/all/20211008063704.GA5370@kili/
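A simplified sketch of the pattern (the real loops live in the binary-tree match finder):

```c
/* The counter is now a signed int guarded by `> 0`, so the final
 * decrement cannot wrap around the way a U32 counter could. */
static void walkTree(int searchLog)
{
    int nbCompares = 1 << searchLog;   /* was: U32 nbCompares */
    for (; nbCompares > 0; --nbCompares) {
        /* ... compare a candidate match, descend the binary tree ... */
    }
}
```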
* Extract out common portion of `lib/Makefile` into `lib/libzstd.mk`.
Most relevantly, the way we find library files.
* Use `lib/libzstd.mk` in the other Makefiles instead of repeating the
same code.
* Add a test, `tests/test-variants.sh`, that checks that the builds of
  `make -C programs allVariants` are correct, and run it in GitHub Actions.
* Adds support for ASM files in the CMake build.
The Meson build is not updated because it lists every file in zstd
and supports ASM out of the box, so the Huffman ASM commit will just add
the ASM file to the list.
The Visual Studio build is not updated because I'm not adding ASM
support to Visual Studio yet.
Better for large files, and for sources with relatively "stable" entropy,
like silesia.tar.
Slightly worse for files with rapidly changing entropy,
like calgary.tar.
Updated the small-files tests in the fuzzer.
As a library, the default shouldn't be to write anything to the console.
`cover` and `fastcover` have a `g_displayLevel` variable to control this behavior.
It's now set to 0 (no display) by default.
Setting notification to a higher level should be an explicit operation by a console application.
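For instance, a console application that wants progress output during training can opt back in through the public `zdict.h` parameters (a sketch; the `k`/`d` values are just illustrative):

```c
#define ZDICT_STATIC_LINKING_ONLY   /* cover parameters are experimental */
#include <string.h>
#include "zdict.h"

static size_t trainWithNotifications(void* dictBuffer, size_t dictCapacity,
                                     const void* samples, const size_t* sampleSizes,
                                     unsigned nbSamples)
{
    ZDICT_cover_params_t params;
    memset(&params, 0, sizeof(params));
    params.k = 200;                        /* illustrative segment size */
    params.d = 8;                          /* illustrative dmer size */
    params.zParams.notificationLevel = 2;  /* explicit opt-in; 0 (default) = silent */
    return ZDICT_trainFromBuffer_cover(dictBuffer, dictCapacity,
                                       samples, sampleSizes, nbSamples, params);
}
```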
It's a bit strange, because this is hitting the dictionary special case where
the dictionary is contiguous with the input and still runs in the
single-segment path.
We should probably change that to hit the `extDict` path instead?
This was ~30mn, by far the longest run on Travis CI.
That's because it re-analyzes the same files multiple times (notably library files).
It also performs actions that make no sense for static-analysis purposes,
such as building the single-file library.
Reduced the time spent in this test by reducing its scope:
just build the CLI (and, necessarily, the library along with it).
These are the only parts that really deserve to be analyzed.
Unfortunately, it still results in a number of false positives when using newer versions of scan-build
(each version of scan-build generates a different list of false positives).
These will have to be fixed before transferring to GitHub Actions.
by allowing parallel build of units,
and reducing optimization levels.
Parallel build is only effective on "recent" versions of `zstd`,
because previously the list of units was passed as a list of source files,
which is something neither `make` nor `gcc` can parallelize.
So its impact is mild (-20%).
Reducing the optimization level to `-O1` makes compilation much faster.
It also makes the runtime slower,
but in this test, compilation time dominates run time.
The savings are very significant (-50%).
On my test system, these changes reduce the length of this test from 13mn to 5mn.
* Add a Huffman round trip fuzzer
* Fix two minor bugs in Huffman that aren't exposed in zstd
- Incorrect weight comparison (weights are allowed to be equal to
table log).
- HUF_compress1X_usingCTable_internal() can return a compressed
  size >= the source size, so the assert that `cSize <= 65535` isn't
  correct; the condition needs to be checked instead (see the sketch below).
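A sketch of the second fix, using the `HUF_compress1X_usingCTable()` entry point from the internal `huf.h` for illustration:

```c
#define HUF_STATIC_LINKING_ONLY
#include "huf.h"   /* internal header: HUF_CElt, HUF_compress1X_usingCTable() */

/* Returns 0 to mean "not compressible", per the HUF convention. */
static size_t compressOneStream(void* dst, size_t dstCapacity,
                                const void* src, size_t srcSize,
                                const HUF_CElt* CTable)
{
    size_t const cSize = HUF_compress1X_usingCTable(dst, dstCapacity,
                                                    src, srcSize, CTable);
    if (HUF_isError(cSize)) return cSize;
    /* Previously an assert(cSize <= 65535), which can fire because the
     * compressed size may reach or exceed srcSize. Check it instead and
     * let the caller store the data uncompressed. */
    if (cSize == 0 || cSize > 65535) return 0;
    return cSize;
}
```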
This patch is supposed to improve compatibility with less-featured tar variants,
"when the tar program used does not support historical options (without hyphen) nor the '-z' option."
Patch proposed by Antonio Diaz Diaz.
Compress the input twice in the `simple_round_trip` and
`dictionary_round_trip` fuzzers with exactly the same parameters, but
reusing the context. Then ensure that the compressed output is
identical.
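In sketch form, the added check looks like this (the actual fuzzers also exercise varied parameters and use their own assert macros):

```c
#include <assert.h>
#include <string.h>
#include <zstd.h>

static void checkDeterminism(ZSTD_CCtx* cctx,
                             void* dst0, void* dst1, size_t dstCapacity,
                             const void* src, size_t srcSize)
{
    size_t const cSize0 = ZSTD_compress2(cctx, dst0, dstCapacity, src, srcSize);
    ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);   /* reuse: keep parameters */
    size_t const cSize1 = ZSTD_compress2(cctx, dst1, dstCapacity, src, srcSize);
    assert(!ZSTD_isError(cSize0) && !ZSTD_isError(cSize1));
    assert(cSize0 == cSize1);                         /* identical sizes ... */
    assert(memcmp(dst0, dst1, cSize0) == 0);          /* ... and identical bytes */
}
```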
This flag forces zstd to always load the prefix in ext-dict mode, even
if it happens to be contiguous, to force determinism. It also applies to
dictionaries that are re-processed.
A determinism test case is also added, which fails without
`ZSTD_c_deterministicRefPrefix` and passes with it set.
Question: Should this be the default behavior? It isn't in this PR.
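Usage, in sketch form (the parameter is experimental, hence `ZSTD_STATIC_LINKING_ONLY`):

```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_c_deterministicRefPrefix is experimental */
#include <zstd.h>

static size_t compressWithPrefix(ZSTD_CCtx* cctx,
                                 void* dst, size_t dstCapacity,
                                 const void* src, size_t srcSize,
                                 const void* prefix, size_t prefixSize)
{
    /* Always load the prefix in ext-dict mode, even if it happens to be
     * contiguous with `src`, so that the output is deterministic. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_deterministicRefPrefix, 1);
    ZSTD_CCtx_refPrefix(cctx, prefix, prefixSize);
    return ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
}
```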
`open()`'s mode bits are only applied to files that are created by the call.
If the output file already exists, but is not readable, the `fopen()` would
fail, preventing us from removing it, which would mean that the file would
not end up with the correct permission bits.
It's not clear to me why the `fopen()` is there at all. `UTIL_isRegularFile()`
should be sufficient, AFAICT.
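For reference, a sketch of the intended pattern (`UTIL_isRegularFile()` is zstd's own `stat()` helper from `programs/util.h`):

```c
#include <fcntl.h>
#include <unistd.h>
#include "util.h"   /* UTIL_isRegularFile(), from zstd's programs/ */

/* open()'s mode argument only matters when O_CREAT actually creates the
 * file; a pre-existing file keeps its old permission bits. Removing the
 * destination first guarantees that the create path is taken. */
static int openDstFile(const char* dstName)
{
    if (UTIL_isRegularFile(dstName))
        unlink(dstName);   /* if this fails, open() will report the error */
    return open(dstName, O_WRONLY | O_CREAT | O_TRUNC, 0666);
}
```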
Previous lower limit was 1 MB.
Note : by default, the lowest job size is 2 MB, achieved at level 1.
Even lower job sizes can be achieved by manipulating this value directly,
or manually modifying window sizes to lower amounts.
Updated the unit test to ensure that this new limit works fine
(the test would fail with the previous 1 MB limit).
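A sketch of driving the job size directly through the public API (values below the floor are clamped by the library; requires a multithread-enabled build):

```c
#include <zstd.h>

static void useSmallJobs(ZSTD_CCtx* cctx)
{
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 4);        /* enable MT mode */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_jobSize, 512 << 10);  /* request 512 KB jobs */
}
```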
Switch from `-T0` to the default `-T1` which significantly reduces
memory usage for level 19 when there are many cores. This fixes
32-bit issues of running out of address space.
Fixes #2603.
* Fix overflow correction when `windowLog < cycleLog`. Previously, we
got the correction wrong in this case, and our chain tables and binary
trees would be corrupted. Now, we work as long as `maxDist` is a power
of two, by adding `MAX(maxDist, cycleSize)` to our indices (see the
sketch after this list).
* When `ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY` is defined to non-zero,
run overflow correction as frequently as allowed without impacting
the compression ratio.
* Enable `ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY` in `fuzzer` and
`zstreamtest` as well as all the OSS-Fuzz fuzzers. This has a 5-10%
speed penalty at most, which seems reasonable.
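A simplified sketch of the corrected index computation (the real code is `ZSTD_window_correctOverflow()`, which also handles a couple of edge cases omitted here):

```c
typedef unsigned int U32;
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Compute how much to subtract from every table index. The reduction is
 * a multiple of cycleSize, so (index % cycleSize) is preserved for the
 * cyclic chain/tree tables, and at least MAX(maxDist, cycleSize) of
 * history stays valid. Assumes maxDist is a power of two. */
static U32 overflowCorrection(U32 curr, U32 cycleLog, U32 maxDist)
{
    U32 const cycleSize = 1u << cycleLog;
    U32 const currentCycle = curr & (cycleSize - 1);
    U32 const newCurrent = currentCycle + MAX(maxDist, cycleSize);
    return curr - newCurrent;
}
```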
* Switch to yearless copyright per FB policy
* Fix up SPDX-License-Identifier lines in `contrib/linux-kernel` sources
* Add zstd copyright/license header to the `contrib/linux-kernel` sources
* Update the `tests/test-license.py` to check for yearless copyright
* Improvements to `tests/test-license.py`
* Check `contrib/linux-kernel` in `tests/test-license.py`
Fixes the update from PR #2508. I had accidentally forgotten to rebuild
the library, and the regression test suite isn't hooked up to the new
fancy build system yet.
I've double-checked that the results are deterministic.
9f327c02fd changed the compression method
for LDM, so the results are slightly different.
I've re-tested LDM on some larger inputs and everything seems fine.
These ratio changes just seem to be noise. There is generally a 0.01%
swing in ratio, sometimes better sometimes worse, but never large.
* Fix compiler version regex, which was broken for multi-digit
versions.
* Fix compiler detection for gcc.
* Disable `pointer-overflow` instead of `integer-overflow` for gcc
versions newer than 8.0.0.
The simple compression functions are intended to ignore the advanced
parameters, but they were accidentally using them. All the
`ZSTD_parameters` were set correctly, but any extra parameters were
used as-is. E.g. `ZSTD_c_format`.
This PR makes all the simple single-pass functions listed below ignore
the advanced parameters, as intended.
* `ZSTD_compressCCtx()`
* `ZSTD_compress_usingDict()`
* `ZSTD_compress_usingCDict()`
* `ZSTD_compress_advanced()`
* `ZSTD_compress_usingCDict_advanced()`
It also adds a test case that ensures that each of these functions
ignores the advanced parameters, as sketched below.
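The test's idea, in sketch form: set an advanced-only parameter on the context, then verify that the simple API ignored it.

```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_c_format is experimental */
#include <assert.h>
#include <zstd.h>

static void testAdvancedParamsIgnored(void* dst, size_t dstCapacity,
                                      const void* src, size_t srcSize)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    /* Set a parameter that only the advanced API should honor. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_format, ZSTD_f_zstd1_magicless);
    {   size_t const cSize = ZSTD_compressCCtx(cctx, dst, dstCapacity, src, srcSize, 3);
        unsigned char const* const out = (unsigned char const*)dst;
        assert(!ZSTD_isError(cSize));
        /* The frame must start with the regular magic number 0xFD2FB528:
         * the magicless format request was ignored, as intended. */
        assert(out[0] == 0x28 && out[1] == 0xB5 && out[2] == 0x2F && out[3] == 0xFD);
    }
    ZSTD_freeCCtx(cctx);
}
```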
Treat ZSTD_getCParams() and ZSTD_adjustCParams() the same way
we treat streaming compression: choose parameters based on the
dictionary size + source size, and assume the source size is small
if unknown. But don't shrink the window log down in
ZSTD_adjustCParams_internal() (see the sketch after the list below).
Fixes #2442.
1. When creating a dictionary keep the same behavior as before.
Assume the source size is 513 bytes when adjusting parameters.
2. When calling ZSTD_getCParams() or ZSTD_adjustCParams() keep
the same behavior as before.
3. When attaching a dictionary keep the same behavior of ignoring
the dictionary size. When streaming this will select the
largest parameters and not adjust them down. But, the CDict
will use the correctly sized parameters, which seems like the
right tradeoff.
4. When not attaching a dictionary (either forced not to, or
using a prefix dictionary) we select parameters based on the
dictionary size + source size, and assume the source size is
small, which is the same behavior as before. But, now we don't
adjust the window log (and hash and chain log) down when the
source size is unknown.
When the source size is unknown all cdicts should attach, except
when the user disables attaching, or `forceWindow` is used. This
means that when streaming with a CDict we end up in the good case
where we get small CDict parameters, and large source parameters.
TODO: Add a streaming + dictionary regression test case.
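Sketch of case 4 from the public API's point of view: the dictionary size now participates in parameter selection, and an unknown source size no longer shrinks the window log.

```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_getCParams() is in the experimental section */
#include <zstd.h>

/* srcSizeHint == 0 means "unknown": it is still assumed small when
 * sizing the tables, but the window log is no longer adjusted down. */
static ZSTD_compressionParameters paramsForDict(size_t dictSize)
{
    return ZSTD_getCParams(19 /* compression level */,
                           0  /* srcSizeHint: unknown */,
                           dictSize);
}
```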
* Add a test that runs without a pledgedSrcSize and with a dictionary.
* Add github.tar data, which uses the github dictionary while compressing
github.tar, instead of each file individually.