[test] Add new CLI testing platform
Adds the new CLI testing platform that I'm proposing. See the added `README.md` for details.
commit f3096ff6d1 (parent f088c430e3)

tests/cli-tests/.gitignore  (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
scratch/
!bin/
!datagen
!zstdcat

tests/cli-tests/README.md  (new file, 248 lines)
@@ -0,0 +1,248 @@
# CLI tests

The CLI tests are focused on testing the zstd CLI.
They are intended to be simple tests that the CLI and arguments work as advertised.
They are not intended to test the library, only the code in `programs/`.
The library will get incidental coverage, but if you find yourself trying to trigger a specific condition in the library, this is the wrong tool.

## Test runner usage

The test runner `run.py` runs tests against the in-tree build of `zstd` and `datagen` by default, which means that `zstd` and `datagen` must be built first.

The `zstd` binary used can be passed with `--zstd /path/to/zstd`.
Additionally, to run `zstd` through a tool like `valgrind` or `qemu`, set the `--exec-prefix 'valgrind -q'` flag.

Similarly, the `--datagen` and `--zstdgrep` flags can be set to specify the paths to their respective binaries. However, these tools are not run through the `EXEC_PREFIX`.
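
For example, a hypothetical run under an emulator or sanitizer could look like the following (the build paths and the `qemu-aarch64` binary are illustrative assumptions, not requirements):

```
# Run every test, executing zstd under valgrind.
./run.py --exec-prefix 'valgrind -q'

# Run an aarch64 build of zstd under qemu (hypothetical paths).
./run.py --zstd ../../build/programs/zstd --exec-prefix qemu-aarch64
```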

Each test executes in its own scratch directory under `scratch/test/name`, e.g. `scratch/basic/help.sh/`. Normally these directories are removed after the test executes. However, the `--preserve` flag will preserve these directories after execution, and save the test's exit code, stdout, and stderr in the scratch directory as `exit`, `stdout`, and `stderr` respectively. This can be useful for debugging/editing a test and updating the expected output.

### Running all the tests

By default the test runner `run.py` will run all the tests, and report the results.

Examples:

```
./run.py
./run.py --preserve
./run.py --zstd ../../build/programs/zstd --datagen ../../build/tests/datagen
```

### Running specific tests

A set of test names can be passed to the test runner `run.py` to only execute those tests.
This can be useful for writing or debugging a test, especially with `--preserve`.

The test name can either be the path to the test file, or the test name, which is the path relative to the test directory.

Examples:

```
./run.py basic/help.sh
./run.py --preserve basic/help.sh basic/version.sh
./run.py --preserve --verbose basic/help.sh
```

## Writing a test

Test cases are arbitrary executables, and can be written in any language, but are generally shell scripts.
After the script executes, the exit code, stderr, and stdout are compared against the expectations.

Each test is run in a clean directory that the test can use for intermediate files. This directory will be cleaned up at the end of the test, unless `--preserve` is passed to the test runner. Additionally, the `setup` script can prepare the directory before the test runs.

### Calling zstd, utilities, and environment variables

The `$PATH` for tests is prepended with the `bin/` sub-directory, which contains helper scripts for ease of testing.
The `zstd` binary will call the zstd binary specified by `run.py` with the correct `$EXEC_PREFIX`.
Similarly, `datagen`, `unzstd`, `zstdgrep`, `zstdcat`, etc., are provided.

Helper utilities like `cmp_size`, `println`, and `die` are provided here too. See their scripts for details.

Common shell script libraries are provided under `common/`, with helper variables and functions. They can be sourced with `source "$COMMON/library.sh"`.

Lastly, environment variables are provided for testing, which can be listed when calling `run.py` with `--verbose`.
They are generally used by the helper scripts in `bin/` to coordinate everything.
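
Putting these together, a minimal sketch of a test case built on the provided helpers might look like this (the file name is arbitrary, and the final size check assumes `datagen` produces compressible data):

```
#!/bin/sh
set -e

source "$COMMON/platform.sh"   # common variables like $INTOVOID

datagen -g1000 > file          # bin/datagen wraps $DATAGEN_BIN
zstd file                      # bin/zstd wraps $EXEC_PREFIX + $ZSTD_BIN
zstd -t file.zst || die "round-trip failed"
cmp_size -lt file.zst file     # compressed output should be smaller
```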

### Basic test case

When executing your `$TEST` executable, by default the exit code is expected to be `0`. However, you can provide an alternate expected exit code in a `$TEST.exit` file.

When executing your `$TEST` executable, by default the expected stderr and stdout are empty. However, you can override the default by providing one of three files:

* `$TEST.{stdout,stderr}.exact`
* `$TEST.{stdout,stderr}.glob`
* `$TEST.{stdout,stderr}.ignore`

If you provide a `.exact` file, the output is expected to match exactly, byte-for-byte.

If you provide a `.glob` file, the output is expected to match the expected file, where each line is interpreted as a glob pattern. Additionally, a line containing only `...` matches all lines until the next expected line matches.

If you provide a `.ignore` file, the output is ignored.

#### Passing examples

All these examples pass.

Exit with code 1, and change the expected exit code to 1.

```
exit-1.sh
---
#!/bin/sh
exit 1
---

exit-1.sh.exit
---
1
---
```

Check that the stdout output matches exactly.

```
echo.sh
---
#!/bin/sh
echo "hello world"
---

echo.sh.stdout.exact
---
hello world
---
```

Check the stderr output using a glob.

```
random.sh
---
#!/bin/sh
head -c 10 < /dev/urandom | xxd >&2
---

random.sh.stderr.glob
---
00000000: * * * * * *
---
```

Multiple lines can be matched with `...`.

```
random-num-lines.sh
---
#!/bin/sh
echo hello
seq 0 $RANDOM
echo world
---

random-num-lines.sh.stdout.glob
---
hello
0
...
world
---
```

#### Failing examples

The exit code is expected to be 0, but is 1.

```
exit-1.sh
---
#!/bin/sh
exit 1
---
```

Stdout is expected to be empty, but isn't.

```
echo.sh
---
#!/bin/sh
echo hello world
---
```

Stderr is expected to be `hello` but is `world`.

```
hello.sh
---
#!/bin/sh
echo world >&2
---

hello.sh.stderr.exact
---
hello
---
```

### Setup & teardown scripts

Finally, test writing can be eased with setup and teardown scripts.
Each directory in the test directory is a test suite consisting of all tests within that directory (but not sub-directories).
This test suite can come with 4 scripts to help test writing:

* `setup_once`
* `teardown_once`
* `setup`
* `teardown`

The `setup_once` and `teardown_once` scripts are run once before and after all the tests in the suite respectively.
They operate in the scratch directory for the test suite, which is the parent directory of each scratch directory for each test case.
They can do work that is shared between tests to improve test efficiency.
For example, the `dictionaries/setup_once` script builds several dictionaries, for use in the `dictionaries` tests.

The `setup` and `teardown` scripts run before and after each test case respectively, in the test case's scratch directory.
These scripts can do work that is shared between test cases to make tests more succinct.
For example, the `dictionaries/setup` script copies the dictionaries built by the `dictionaries/setup_once` script into the test's scratch directory, to make them easier to use, and make sure they aren't accidentally modified.

#### Examples

```
basic/setup
---
#!/bin/sh
# Create some files for testing with
datagen > file
datagen > file0
datagen > file1
---

basic/test.sh
---
#!/bin/sh
zstd file file0 file1
---

dictionaries/setup_once
---
#!/bin/sh
set -e

mkdir files/ dicts/
for i in $(seq 10); do
    datagen -g1000 > files/$i
done

zstd --train -r files/ -o dicts/0
---

dictionaries/setup
---
#!/bin/sh

# Runs in the test case's scratch directory.
# The test suite's scratch directory that
# `setup_once` operates in is the parent directory.
cp -r ../files ../dicts .
---
```

tests/cli-tests/basic/help.sh  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh -e
println "+ zstd -h"
zstd -h
println "+ zstd -H"
zstd -H
println "+ zstd --help"
zstd --help

tests/cli-tests/basic/help.sh.stdout.glob  (new file, 34 lines)
@@ -0,0 +1,34 @@
+ zstd -h
*** zstd command line interface *-bits v1.*.*, by Yann Collet ***
Usage :
zstd *args* *FILE(s)* *-o file*

FILE : a filename
with no FILE, or when FILE is - , read standard input
Arguments :
-# : # compression level*
-d : decompression
-D DICT: use DICT as Dictionary for compression or decompression
-o file: result stored into `file` (only 1 output file)
-f : disable input and output checks. Allows overwriting existing files,
input from console, output to stdout, operating on links,
block devices, etc.
--rm : remove source file(s) after successful de/compression
-k : preserve source file(s) (default)
-h/-H : display help/long help and exit

Advanced arguments :
-V : display Version number and exit
...
+ zstd -H
...
Arguments :
...
Advanced arguments :
...
+ zstd --help
...
Arguments :
...
Advanced arguments :
...

tests/cli-tests/basic/version.sh  (new executable file, 3 lines)
@@ -0,0 +1,3 @@
#!/bin/sh -e
zstd -V
zstd --version

tests/cli-tests/basic/version.sh.stdout.glob  (new file, 2 lines)
@@ -0,0 +1,2 @@
*** zstd command line interface *-bits v1.*.*, by Yann Collet ***
*** zstd command line interface *-bits v1.*.*, by Yann Collet ***

tests/cli-tests/bin/cmp_size  (new executable file, 46 lines)
@@ -0,0 +1,46 @@
#!/bin/sh

# Small utility to compare file sizes without printing them with set -x.

set -e

usage()
{
    printf "USAGE:\n\t$0 [-eq|-ne|-lt|-le|-gt|-ge] FILE1 FILE2\n"
}

help()
{
    printf "Small utility to compare file sizes without printing them with set -x.\n\n"
    usage
}

case "$1" in
    -h) help; exit 0 ;;
    --help) help; exit 0 ;;
esac

if ! test -f "$2"; then
    printf "FILE1='%b' is not a file\n\n" "$2"
    usage
    exit 1
fi

if ! test -f "$3"; then
    printf "FILE2='%b' is not a file\n\n" "$3"
    usage
    exit 1
fi


size1=$(wc -c < "$2")
size2=$(wc -c < "$3")

case "$1" in
    -eq) [ "$size1" -eq "$size2" ] ;;
    -ne) [ "$size1" -ne "$size2" ] ;;
    -lt) [ "$size1" -lt "$size2" ] ;;
    -le) [ "$size1" -le "$size2" ] ;;
    -gt) [ "$size1" -gt "$size2" ] ;;
    -ge) [ "$size1" -ge "$size2" ] ;;
esac
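
Because `cmp_size` exits non-zero when the comparison fails, it composes directly with `set -e` or `die` in a test. A sketch, mirroring how `compression/levels.sh` below uses it:

```
zstd -19 file -o file-19.zst
zstd -1 file -o file-1.zst
cmp_size -lt file-19.zst file-1.zst   # a higher level should compress smaller
```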

tests/cli-tests/bin/datagen  (new executable file, 3 lines)
@@ -0,0 +1,3 @@
#!/bin/sh

"$DATAGEN_BIN" "$@"

tests/cli-tests/bin/die  (new executable file, 4 lines)
@@ -0,0 +1,4 @@
#!/bin/sh

println "${*}" 1>&2
exit 1

tests/cli-tests/bin/println  (new executable file, 2 lines)
@@ -0,0 +1,2 @@
#!/usr/bin/env sh
printf '%b\n' "${*}"

tests/cli-tests/bin/unzstd  (new symbolic link)
@@ -0,0 +1 @@
zstd

tests/cli-tests/bin/zstd  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh

if [ -z "$EXEC_PREFIX" ]; then
    "$ZSTD_BIN" "$@"
else
    $EXEC_PREFIX "$ZSTD_BIN" "$@"
fi
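
This wrapper is what makes `--exec-prefix` transparent to tests: every `zstd` call in a test resolves to this script via `$PATH`. A sketch, assuming the runner was started with `--exec-prefix 'valgrind -q'`:

```
# inside a test, this line...
zstd -t file.zst
# ...effectively runs: valgrind -q "$ZSTD_BIN" -t file.zst
```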

tests/cli-tests/bin/zstdcat  (new symbolic link)
@@ -0,0 +1 @@
zstd

tests/cli-tests/bin/zstdgrep  (new executable file, 2 lines)
@@ -0,0 +1,2 @@
#!/bin/sh
"$ZSTDGREP_BIN" "$@"

tests/cli-tests/common/format.sh  (new file, 19 lines)
@@ -0,0 +1,19 @@
#!/bin/sh

source "$COMMON/platform.sh"

zstd_supports_format()
{
    zstd -h | grep > $INTOVOID -- "--format=$1"
}

format_extension()
{
    if [ "$1" = "zstd" ]; then
        printf "zst"
    elif [ "$1" = "gzip" ]; then
        printf "gz"
    else
        printf "%s" "$1"
    fi
}

tests/cli-tests/common/mtime.sh  (new file, 13 lines)
@@ -0,0 +1,13 @@
source "$COMMON/platform.sh"

MTIME="stat -c %Y"
case "$UNAME" in
    Darwin | FreeBSD | OpenBSD | NetBSD) MTIME="stat -f %m" ;;
esac

assertSameMTime() {
    MT1=$($MTIME "$1")
    MT2=$($MTIME "$2")
    echo MTIME $MT1 $MT2
    [ "$MT1" = "$MT2" ] || die "mtime on $1 doesn't match mtime on $2 ($MT1 != $MT2)"
}

tests/cli-tests/common/permissions.sh  (new file, 18 lines)
@@ -0,0 +1,18 @@
source "$COMMON/platform.sh"

GET_PERMS="stat -c %a"
case "$UNAME" in
    Darwin | FreeBSD | OpenBSD | NetBSD) GET_PERMS="stat -f %Lp" ;;
esac

assertFilePermissions() {
    STAT1=$($GET_PERMS "$1")
    STAT2=$2
    [ "$STAT1" = "$STAT2" ] || die "permissions on $1 don't match expected ($STAT1 != $STAT2)"
}

assertSamePermissions() {
    STAT1=$($GET_PERMS "$1")
    STAT2=$($GET_PERMS "$2")
    [ "$STAT1" = "$STAT2" ] || die "permissions on $1 don't match those on $2 ($STAT1 != $STAT2)"
}
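
A hypothetical test exercising these helpers together with `common/mtime.sh` could look like this; it assumes zstd's default behavior of carrying the source file's permissions and mtime over to the output:

```
#!/bin/sh
set -e

source "$COMMON/mtime.sh"
source "$COMMON/permissions.sh"

datagen > file
chmod 0644 file
zstd file

assertSamePermissions file file.zst
assertSameMTime file file.zst   # echoes to stdout: pair with a $TEST.stdout.ignore file
```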

tests/cli-tests/common/platform.sh  (new file, 37 lines)
@@ -0,0 +1,37 @@
#!/bin/sh

UNAME=$(uname)

isWindows=false
INTOVOID="/dev/null"
case "$UNAME" in
    GNU) DEVDEVICE="/dev/random" ;;
    *) DEVDEVICE="/dev/zero" ;;
esac
case "$OS" in
    Windows*)
        isWindows=true
        INTOVOID="NUL"
        DEVDEVICE="NUL"
        ;;
esac

case "$UNAME" in
    Darwin) MD5SUM="md5 -r" ;;
    FreeBSD) MD5SUM="gmd5sum" ;;
    NetBSD) MD5SUM="md5 -n" ;;
    OpenBSD) MD5SUM="md5" ;;
    *) MD5SUM="md5sum" ;;
esac

DIFF="diff"
case "$UNAME" in
    SunOS) DIFF="gdiff" ;;
esac

if echo hello | zstd -v -T2 2>&1 > $INTOVOID | grep -q 'multi-threading is disabled'
then
    hasMT=""
else
    hasMT="true"
fi
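
Tests can branch on these variables; a small sketch (it assumes a `setup` script already created `file`):

```
source "$COMMON/platform.sh"

zstd --help > $INTOVOID          # discard output portably (NUL on Windows)

if [ -n "$hasMT" ]; then         # only exercise threading where supported
    zstd -T2 -f file
fi
```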

tests/cli-tests/compression/adapt.sh  (new executable file, 6 lines)
@@ -0,0 +1,6 @@
#!/bin/sh

set -e

# Test --adapt
zstd -f file --adapt -c | zstd -t

tests/cli-tests/compression/basic.sh  (new executable file, 28 lines)
@@ -0,0 +1,28 @@
#!/bin/sh -e

# Uncomment the set -x line for debugging
# set -x

# Test compression flags and check that they work
zstd file ; zstd -t file.zst
zstd -f file ; zstd -t file.zst
zstd -f -z file ; zstd -t file.zst
zstd -f -k file ; zstd -t file.zst
zstd -f -C file ; zstd -t file.zst
zstd -f --check file ; zstd -t file.zst
zstd -f --no-check file ; zstd -t file.zst
zstd -f -- file ; zstd -t file.zst

# Test output file compression
zstd -o file-out.zst ; zstd -t file-out.zst
zstd -fo file-out.zst; zstd -t file-out.zst

# Test compression to stdout
zstd -c file | zstd -t
zstd --stdout file | zstd -t
println bob | zstd | zstd -t

# Test --rm
cp file file-rm
zstd --rm file-rm; zstd -t file-rm.zst
test ! -f file-rm

tests/cli-tests/compression/compress-literals.sh  (new executable file, 10 lines)
@@ -0,0 +1,10 @@
#!/bin/sh

set -e

# Test --[no-]compress-literals
zstd file --no-compress-literals -1 -c | zstd -t
zstd file --no-compress-literals -19 -c | zstd -t
zstd file --no-compress-literals --fast=1 -c | zstd -t
zstd file --compress-literals -1 -c | zstd -t
zstd file --compress-literals --fast=1 -c | zstd -t

tests/cli-tests/compression/format.sh  (new executable file, 16 lines)
@@ -0,0 +1,16 @@
#!/bin/sh

source "$COMMON/format.sh"

set -e

# Test --format
zstd --format=zstd file -f
zstd -t file.zst
for format in "gzip" "lz4" "xz" "lzma"; do
    if zstd_supports_format $format; then
        zstd --format=$format file
        zstd -t file.$(format_extension $format)
        zstd -c --format=$format file | zstd -t --format=$format
    fi
done

tests/cli-tests/compression/levels.sh  (new executable file, 64 lines)
@@ -0,0 +1,64 @@
#!/bin/sh

set -e
set -x

datagen > file

# Compress with various levels and ensure that their sizes are ordered
zstd --fast=10 file -o file-f10.zst
zstd --fast=1 file -o file-f1.zst
zstd -1 file -o file-1.zst
zstd -19 file -o file-19.zst
zstd -22 --ultra file -o file-22.zst

zstd -t file-{f10,f1,1,19,22}.zst

cmp_size -ne file-19.zst file-22.zst
cmp_size -lt file-19.zst file-1.zst
cmp_size -lt file-1.zst file-f1.zst
cmp_size -lt file-f1.zst file-f10.zst

# Test default levels
zstd --fast file -f
cmp file.zst file-f1.zst || die "--fast is not level -1"

zstd -0 file -o file-0.zst
zstd -f file
cmp file.zst file-0.zst || die "Level 0 is not the default level"

# Test level clamping
zstd -99 file -o file-99.zst
cmp file-19.zst file-99.zst || die "Level 99 must be clamped to 19"
zstd --fast=200000 file -c | zstd -t

zstd -5000000000 -f file && die "Level too large, must fail" ||:
zstd --fast=5000000000 -f file && die "Level too large, must fail" ||:

# Test setting a level through the environment variable
ZSTD_CLEVEL=-10 zstd file -o file-f10-env.zst
ZSTD_CLEVEL=1 zstd file -o file-1-env.zst
ZSTD_CLEVEL=+19 zstd file -o file-19-env.zst
ZSTD_CLEVEL=+99 zstd file -o file-99-env.zst

cmp file-f10{,-env}.zst || die "Environment variable failed to set level"
cmp file-1{,-env}.zst || die "Environment variable failed to set level"
cmp file-19{,-env}.zst || die "Environment variable failed to set level"
cmp file-99{,-env}.zst || die "Environment variable failed to set level"

# Test that an invalid environment clevel falls back to the default level
zstd -f file
ZSTD_CLEVEL=- zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=+ zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=a zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=-a zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=+a zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=3a7 zstd -f file -o file-env.zst ; cmp file.zst file-env.zst
ZSTD_CLEVEL=5000000000 zstd -f file -o file-env.zst; cmp file.zst file-env.zst

# Test that the environment clevel is overridden by the command line
ZSTD_CLEVEL=10 zstd -f file -1 -o file-1-env.zst
ZSTD_CLEVEL=10 zstd -f file --fast=1 -o file-f1-env.zst

cmp file-1{,-env}.zst || die "Environment variable not overridden"
cmp file-f1{,-env}.zst || die "Environment variable not overridden"
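
The `&& die "..." ||:` pattern above inverts a command's status under `set -e`: if the command unexpectedly succeeds, `die` fails the test, and if it fails as expected, the trailing `||:` absorbs the non-zero status so the script continues. In sketch form (`must_fail_cmd` is a placeholder):

```
must_fail_cmd && die "must fail" ||:
```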

tests/cli-tests/compression/levels.sh.stderr.exact  (new file, 75 lines)
@@ -0,0 +1,75 @@
+ datagen
+ zstd --fast=10 file -o file-f10.zst
+ zstd --fast=1 file -o file-f1.zst
+ zstd -1 file -o file-1.zst
+ zstd -19 file -o file-19.zst
+ zstd -22 --ultra file -o file-22.zst
+ zstd -t file-f10.zst file-f1.zst file-1.zst file-19.zst file-22.zst
+ cmp_size -ne file-19.zst file-22.zst
+ cmp_size -lt file-19.zst file-1.zst
+ cmp_size -lt file-1.zst file-f1.zst
+ cmp_size -lt file-f1.zst file-f10.zst
+ zstd --fast file -f
+ cmp file.zst file-f1.zst
+ zstd -0 file -o file-0.zst
+ zstd -f file
+ cmp file.zst file-0.zst
+ zstd -99 file -o file-99.zst
Warning : compression level higher than max, reduced to 19
+ cmp file-19.zst file-99.zst
+ zstd --fast=200000 file -c
+ zstd -t
+ zstd -5000000000 -f file
error: numeric value overflows 32-bit unsigned int
+ :
+ zstd --fast=5000000000 -f file
error: numeric value overflows 32-bit unsigned int
+ :
+ ZSTD_CLEVEL=-10
+ zstd file -o file-f10-env.zst
+ ZSTD_CLEVEL=1
+ zstd file -o file-1-env.zst
+ ZSTD_CLEVEL=+19
+ zstd file -o file-19-env.zst
+ ZSTD_CLEVEL=+99
+ zstd file -o file-99-env.zst
Warning : compression level higher than max, reduced to 19
+ cmp file-f10.zst file-f10-env.zst
+ cmp file-1.zst file-1-env.zst
+ cmp file-19.zst file-19-env.zst
+ cmp file-99.zst file-99-env.zst
+ zstd -f file
+ ZSTD_CLEVEL=-
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=-: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=+
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=+: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=a
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=a: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=-a
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=-a: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=+a
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=+a: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=3a7
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=3a7: not a valid integer value
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=5000000000
+ zstd -f file -o file-env.zst
Ignore environment variable setting ZSTD_CLEVEL=5000000000: numeric value too large
+ cmp file.zst file-env.zst
+ ZSTD_CLEVEL=10
+ zstd -f file -1 -o file-1-env.zst
+ ZSTD_CLEVEL=10
+ zstd -f file --fast=1 -o file-f1-env.zst
+ cmp file-1.zst file-1-env.zst
+ cmp file-f1.zst file-f1-env.zst

tests/cli-tests/compression/long-distance-matcher.sh  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh

set -e

# Test --long
zstd -f file --long ; zstd -t file.zst
zstd -f file --long=20; zstd -t file.zst

tests/cli-tests/compression/multi-threaded.sh  (new executable file, 11 lines)
@@ -0,0 +1,11 @@
#!/bin/sh

set -e

# Test multi-threaded flags
zstd --single-thread file -f ; zstd -t file.zst
zstd -T2 -f file ; zstd -t file.zst
zstd --rsyncable -f file ; zstd -t file.zst
zstd -T0 -f file ; zstd -t file.zst
zstd -T0 --auto-threads=logical -f file ; zstd -t file.zst
zstd -T0 --auto-threads=physical -f file; zstd -t file.zst

tests/cli-tests/compression/row-match-finder.sh  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh

set -e

# Test --[no-]row-match-finder
zstd file -7f --row-match-finder
zstd file -7f --no-row-match-finder

tests/cli-tests/compression/setup  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh

set -e

datagen > file
datagen > file0
datagen > file1

tests/cli-tests/compression/stream-size.sh  (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh

set -e

# Test stream size & hint
datagen -g7654 | zstd --stream-size=7654 | zstd -t
datagen -g7654 | zstd --size-hint=7000 | zstd -t

tests/cli-tests/dict-builder/no-inputs  (new executable file, 3 lines)
@@ -0,0 +1,3 @@
#!/bin/sh
set -x
zstd --train

tests/cli-tests/dict-builder/no-inputs.exit  (new file, 1 line)
@@ -0,0 +1 @@
14

tests/cli-tests/dict-builder/no-inputs.stderr.exact  (new file, 5 lines)
@@ -0,0 +1,5 @@
+ zstd --train
! Warning : nb of samples too low for proper processing !
! Please provide _one file per sample_.
! Alternatively, split files into fixed-size blocks representative of samples, with -B#
Error 14 : nb of samples too low

tests/cli-tests/dictionaries/dictionary-mismatch.sh  (new executable file, 29 lines)
@@ -0,0 +1,29 @@
#!/bin/sh

source "$COMMON/platform.sh"

set -e

if false; then # Disabled: the dictionaries are now built by dictionaries/setup_once
    for seed in $(seq 100); do
        datagen -g1000 -s$seed > file$seed
    done

    zstd --train -r . -o dict0 -qq

    for seed in $(seq 101 200); do
        datagen -g1000 -s$seed > file$seed
    done

    zstd --train -r . -o dict1 -qq

    [ "$($MD5SUM < dict0)" != "$($MD5SUM < dict1)" ] || die "dictionaries must not match"

    datagen -g1000 -s0 > file0
fi

set -x
zstd files/0 -D dicts/0
zstd -t files/0.zst -D dicts/0
zstd -t files/0.zst -D dicts/1 && die "Must fail" ||:
zstd -t files/0.zst && die "Must fail" ||:

tests/cli-tests/dictionaries/dictionary-mismatch.sh.stderr.exact  (new file, 8 lines)
@@ -0,0 +1,8 @@
+ zstd files/0 -D dicts/0
+ zstd -t files/0.zst -D dicts/0
+ zstd -t files/0.zst -D dicts/1
files/0.zst : Decoding error (36) : Dictionary mismatch
+ :
+ zstd -t files/0.zst
files/0.zst : Decoding error (36) : Dictionary mismatch
+ :

tests/cli-tests/dictionaries/setup  (new executable file, 6 lines)
@@ -0,0 +1,6 @@
#!/bin/sh

set -e

cp -r ../files .
cp -r ../dicts .

tests/cli-tests/dictionaries/setup_once  (new executable file, 24 lines)
@@ -0,0 +1,24 @@
#!/bin/sh

set -e

source "$COMMON/platform.sh"


mkdir files/ dicts/

for seed in $(seq 50); do
    datagen -g1000 -s$seed > files/$seed
done

zstd --train -r files -o dicts/0 -qq

for seed in $(seq 51 100); do
    datagen -g1000 -s$seed > files/$seed
done

zstd --train -r files -o dicts/1 -qq

cmp dicts/0 dicts/1 && die "dictionaries must not match!"

datagen -g1000 > files/0

tests/cli-tests/run.py  (new executable file, 687 lines)
@@ -0,0 +1,687 @@
#!/usr/bin/env python3
# ################################################################
# Copyright (c) Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under both the BSD-style license (found in the
# LICENSE file in the root directory of this source tree) and the GPLv2 (found
# in the COPYING file in the root directory of this source tree).
# You may select, at your option, one of the above-listed licenses.
# ##########################################################################

import argparse
import contextlib
import copy
import fnmatch
import os
import shutil
import subprocess
import sys
import tempfile
import typing


EXCLUDED_DIRS = {
    "bin",
    "common",
    "scratch",
}


EXCLUDED_BASENAMES = {
    "setup",
    "setup_once",
    "teardown",
    "teardown_once",
    "README.md",
    "run.py",
    ".gitignore",
}

EXCLUDED_SUFFIXES = [
    ".exact",
    ".glob",
    ".ignore",
    ".exit",
]


def exclude_dir(dirname: str) -> bool:
    """
    Should files under the directory :dirname: be excluded from the test runner?
    """
    if dirname in EXCLUDED_DIRS:
        return True
    return False


def exclude_file(filename: str) -> bool:
    """Should the file :filename: be excluded from the test runner?"""
    if filename in EXCLUDED_BASENAMES:
        return True
    for suffix in EXCLUDED_SUFFIXES:
        if filename.endswith(suffix):
            return True
    return False


def read_file(filename: str) -> bytes:
    """Reads the file :filename: and returns the contents as bytes."""
    with open(filename, "rb") as f:
        return f.read()


def diff(a: bytes, b: bytes) -> str:
    """Returns a diff between two different byte-strings :a: and :b:."""
    assert a != b
    with tempfile.NamedTemporaryFile("wb") as fa:
        fa.write(a)
        fa.flush()
        with tempfile.NamedTemporaryFile("wb") as fb:
            fb.write(b)
            fb.flush()

            diff_bytes = subprocess.run(["diff", fa.name, fb.name], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL).stdout
            return diff_bytes.decode("utf8")


def pop_line(data: bytes) -> typing.Tuple[typing.Optional[bytes], bytes]:
    """
    Pop the first line from :data: and returns the first line and the remainder
    of the data as a tuple. If :data: is empty, returns :(None, data):. Otherwise
    the first line always ends in a :\n:, even if it is the last line and :data:
    doesn't end in :\n:.
    """
    NEWLINE = b"\n"[0]

    if data == b'':
        return (None, data)

    newline_idx = data.find(b"\n")
    if newline_idx == -1:
        end_idx = len(data)
    else:
        end_idx = newline_idx + 1

    line = data[:end_idx]
    data = data[end_idx:]

    assert len(line) != 0
    if line[-1] != NEWLINE:
        line += NEWLINE

    return (line, data)


def glob_line_matches(actual: bytes, expect: bytes) -> bool:
    """
    Does the `actual` line match the expected glob line `expect`?
    """
    return fnmatch.fnmatchcase(actual.strip(), expect.strip())

def glob_diff(actual: bytes, expect: bytes) -> bytes:
    """
    Returns None if the :actual: content matches the expected glob :expect:,
    otherwise returns the diff bytes.
    """
    diff = b''
    actual_line, actual = pop_line(actual)
    expect_line, expect = pop_line(expect)
    while True:
        # Handle end of file conditions - allow extra newlines
        while expect_line is None and actual_line == b"\n":
            actual_line, actual = pop_line(actual)
        while actual_line is None and expect_line == b"\n":
            expect_line, expect = pop_line(expect)

        if expect_line is None and actual_line is None:
            if diff == b'':
                return None
            return diff
        elif expect_line is None:
            diff += b"---\n"
            while actual_line is not None:
                diff += b"> "
                diff += actual_line
                actual_line, actual = pop_line(actual)
            return diff
        elif actual_line is None:
            diff += b"---\n"
            while expect_line is not None:
                diff += b"< "
                diff += expect_line
                expect_line, expect = pop_line(expect)
            return diff

        assert expect_line is not None
        assert actual_line is not None

        if expect_line == b'...\n':
            next_expect_line, expect = pop_line(expect)
            if next_expect_line is None:
                if diff == b'':
                    return None
                return diff
            while not glob_line_matches(actual_line, next_expect_line):
                actual_line, actual = pop_line(actual)
                if actual_line is None:
                    diff += b"---\n"
                    diff += b"< "
                    diff += next_expect_line
                    return diff
            expect_line = next_expect_line
            continue

        if not glob_line_matches(actual_line, expect_line):
            diff += b'---\n'
            diff += b'< ' + expect_line
            diff += b'> ' + actual_line

        actual_line, actual = pop_line(actual)
        expect_line, expect = pop_line(expect)


class Options:
    """Options configuring how to run a :TestCase:."""
    def __init__(
        self,
        env: typing.Dict[str, str],
        timeout: typing.Optional[int],
        verbose: bool,
        preserve: bool,
        scratch_dir: str,
        test_dir: str,
    ) -> None:
        self.env = env
        self.timeout = timeout
        self.verbose = verbose
        self.preserve = preserve
        self.scratch_dir = scratch_dir
        self.test_dir = test_dir


class TestCase:
    """
    Logic and state related to running a single test case.

    1. Initialize the test case.
    2. Launch the test case with :TestCase.launch():.
       This will start the test execution in a subprocess, but
       not wait for completion. So you could launch multiple test
       cases in parallel. This will not print any test output.
    3. Analyze the results with :TestCase.analyze():. This will
       join the test subprocess, check the results against the
       expectations, and print the results to stdout.

    :TestCase.run(): is also provided which combines the launch & analyze
    steps for single-threaded use-cases.

    All other methods, prefixed with _, are private helper functions.
    """
    def __init__(self, test_filename: str, options: Options) -> None:
        """
        Initialize the :TestCase: for the test located in :test_filename:
        with the given :options:.
        """
        self._opts = options
        self._test_file = test_filename
        self._test_name = os.path.normpath(
            os.path.relpath(test_filename, start=self._opts.test_dir)
        )
        self._success = {}
        self._message = {}
        self._test_stdin = None
        self._scratch_dir = os.path.abspath(os.path.join(self._opts.scratch_dir, self._test_name))

    @property
    def name(self) -> str:
        """Returns the unique name for the test."""
        return self._test_name

    def launch(self) -> None:
        """
        Launch the test case as a subprocess, but do not block on completion.
        This allows users to run multiple tests in parallel. Results aren't yet
        printed out.
        """
        self._launch_test()

    def analyze(self) -> bool:
        """
        Must be called after :TestCase.launch():. Joins the test subprocess and
        checks the results against expectations. Finally prints the results to
        stdout and returns the success.
        """
        self._join_test()
        self._check_exit()
        self._check_stderr()
        self._check_stdout()
        self._analyze_results()
        return self._succeeded

    def run(self) -> bool:
        """Shorthand for combining both :TestCase.launch(): and :TestCase.analyze():."""
        self.launch()
        return self.analyze()

    def _log(self, *args, **kwargs) -> None:
        """Logs test output."""
        print(file=sys.stdout, *args, **kwargs)

    def _vlog(self, *args, **kwargs) -> None:
        """Logs verbose test output."""
        if self._opts.verbose:
            print(file=sys.stdout, *args, **kwargs)

    def _test_environment(self) -> typing.Dict[str, str]:
        """
        Returns the environment to be used for the
        test subprocess.
        """
        env = copy.copy(os.environ)
        for k, v in self._opts.env.items():
            self._vlog(f"${k}='{v}'")
            env[k] = v
        return env

    def _launch_test(self) -> None:
        """Launch the test subprocess, but do not join it."""
        args = [os.path.abspath(self._test_file)]
        stdin_name = f"{self._test_file}.stdin"
        if os.path.exists(stdin_name):
            self._test_stdin = open(stdin_name, "rb")
            stdin = self._test_stdin
        else:
            stdin = subprocess.DEVNULL
        cwd = self._scratch_dir
        env = self._test_environment()
        self._test_process = subprocess.Popen(
            args=args,
            stdin=stdin,
            cwd=cwd,
            env=env,
            stderr=subprocess.PIPE,
            stdout=subprocess.PIPE
        )

    def _join_test(self) -> None:
        """Join the test process and save stderr, stdout, and the exit code."""
        (stdout, stderr) = self._test_process.communicate(timeout=self._opts.timeout)
        self._output = {}
        self._output["stdout"] = stdout
        self._output["stderr"] = stderr
        self._exit_code = self._test_process.returncode
        self._test_process = None
        if self._test_stdin is not None:
            self._test_stdin.close()
            self._test_stdin = None

    def _check_output_exact(self, out_name: str, expected: bytes) -> None:
        """
        Check the output named :out_name: for an exact match against the :expected: content.
        Saves the success and message.
        """
        check_name = f"check_{out_name}"
        actual = self._output[out_name]
        if actual == expected:
            self._success[check_name] = True
            self._message[check_name] = f"{out_name} matches!"
        else:
            self._success[check_name] = False
            self._message[check_name] = f"{out_name} does not match!\n> diff expected actual\n{diff(expected, actual)}"

    def _check_output_glob(self, out_name: str, expected: bytes) -> None:
        """
        Check the output named :out_name: for a glob match against the :expected: glob.
        Saves the success and message.
        """
        check_name = f"check_{out_name}"
        actual = self._output[out_name]
        diff = glob_diff(actual, expected)
        if diff is None:
            self._success[check_name] = True
            self._message[check_name] = f"{out_name} matches!"
        else:
            utf8_diff = diff.decode('utf8')
            self._success[check_name] = False
            self._message[check_name] = f"{out_name} does not match!\n> diff expected actual\n{utf8_diff}"

    def _check_output(self, out_name: str) -> None:
        """
        Checks the output named :out_name: for a match against the expectation.
        We check for a .exact, .glob, and a .ignore file. If none are found we
        expect that the output should be empty.

        If :Options.preserve: was set then we save the scratch directory and
        save the stderr, stdout, and exit code to the scratch directory for
        debugging.
        """
        if self._opts.preserve:
            # Save the output to the scratch directory
            actual_name = os.path.join(self._scratch_dir, f"{out_name}")
            with open(actual_name, "wb") as f:
                f.write(self._output[out_name])

        exact_name = f"{self._test_file}.{out_name}.exact"
        glob_name = f"{self._test_file}.{out_name}.glob"
        ignore_name = f"{self._test_file}.{out_name}.ignore"

        if os.path.exists(exact_name):
            return self._check_output_exact(out_name, read_file(exact_name))
        elif os.path.exists(glob_name):
            return self._check_output_glob(out_name, read_file(glob_name))
        elif os.path.exists(ignore_name):
            check_name = f"check_{out_name}"
            self._success[check_name] = True
            self._message[check_name] = f"{out_name} ignored!"
        else:
            return self._check_output_exact(out_name, bytes())

    def _check_stderr(self) -> None:
        """Checks the stderr output against the expectation."""
        self._check_output("stderr")

    def _check_stdout(self) -> None:
        """Checks the stdout output against the expectation."""
        self._check_output("stdout")

    def _check_exit(self) -> None:
        """
        Checks the exit code against expectations. If a .exit file
        exists, we expect that the exit code matches the contents.
        Otherwise we expect the exit code to be zero.

        If :Options.preserve: is set we save the exit code to the
        scratch directory under the filename "exit".
        """
        if self._opts.preserve:
            exit_name = os.path.join(self._scratch_dir, "exit")
            with open(exit_name, "w") as f:
                f.write(str(self._exit_code) + "\n")
        exit_name = f"{self._test_file}.exit"
        if os.path.exists(exit_name):
            exit_code: int = int(read_file(exit_name))
        else:
            exit_code: int = 0
        if exit_code == self._exit_code:
            self._success["check_exit"] = True
            self._message["check_exit"] = "Exit code matches!"
        else:
            self._success["check_exit"] = False
            self._message["check_exit"] = f"Exit code mismatch! Expected {exit_code} but got {self._exit_code}"

    def _analyze_results(self) -> None:
        """
        After all tests have been checked, collect all the successes
        and messages, and print the results to stdout.
        """
        STATUS = {True: "PASS", False: "FAIL"}
        checks = sorted(self._success.keys())
        self._succeeded = all(self._success.values())
        self._log(f"{STATUS[self._succeeded]}: {self._test_name}")

        if not self._succeeded or self._opts.verbose:
            for check in checks:
                if self._opts.verbose or not self._success[check]:
                    self._log(f"{STATUS[self._success[check]]}: {self._test_name}.{check}")
                    self._log(self._message[check])

        self._log("----------------------------------------")


class TestSuite:
    """
    Setup & teardown test suite & cases.
    This class is intended to be used as a context manager.

    TODO: Make setup/teardown failure emit messages, not throw exceptions.
    """
    def __init__(self, test_directory: str, options: Options) -> None:
        self._opts = options
        self._test_dir = os.path.abspath(test_directory)
        rel_test_dir = os.path.relpath(test_directory, start=self._opts.test_dir)
        assert not rel_test_dir.startswith(os.path.sep)
        self._scratch_dir = os.path.normpath(os.path.join(self._opts.scratch_dir, rel_test_dir))

    def __enter__(self) -> 'TestSuite':
        self._setup_once()
        return self

    def __exit__(self, _exc_type, _exc_value, _traceback) -> None:
        self._teardown_once()

    @contextlib.contextmanager
    def test_case(self, test_basename: str) -> TestCase:
        """
        Context manager for a test case in the test suite.
        Pass the basename of the test relative to the :test_directory:.
        """
        assert os.path.dirname(test_basename) == ""
        try:
            self._setup(test_basename)
            test_filename = os.path.join(self._test_dir, test_basename)
            yield TestCase(test_filename, self._opts)
        finally:
            self._teardown(test_basename)

    def _remove_scratch_dir(self, dir: str) -> None:
        """Helper to remove a scratch directory with sanity checks"""
        assert "scratch" in dir
        assert dir.startswith(self._scratch_dir)
        assert os.path.exists(dir)
        shutil.rmtree(dir)

    def _setup_once(self) -> None:
        if os.path.exists(self._scratch_dir):
            self._remove_scratch_dir(self._scratch_dir)
        os.makedirs(self._scratch_dir)
        setup_script = os.path.join(self._test_dir, "setup_once")
        if os.path.exists(setup_script):
            self._run_script(setup_script, cwd=self._scratch_dir)

    def _teardown_once(self) -> None:
        assert os.path.exists(self._scratch_dir)
        teardown_script = os.path.join(self._test_dir, "teardown_once")
        if os.path.exists(teardown_script):
            self._run_script(teardown_script, cwd=self._scratch_dir)
        if not self._opts.preserve:
            self._remove_scratch_dir(self._scratch_dir)

    def _setup(self, test_basename: str) -> None:
        test_scratch_dir = os.path.join(self._scratch_dir, test_basename)
        assert not os.path.exists(test_scratch_dir)
        os.makedirs(test_scratch_dir)
        setup_script = os.path.join(self._test_dir, "setup")
        if os.path.exists(setup_script):
            self._run_script(setup_script, cwd=test_scratch_dir)

    def _teardown(self, test_basename: str) -> None:
        test_scratch_dir = os.path.join(self._scratch_dir, test_basename)
        assert os.path.exists(test_scratch_dir)
        teardown_script = os.path.join(self._test_dir, "teardown")
        if os.path.exists(teardown_script):
            self._run_script(teardown_script, cwd=test_scratch_dir)
        if not self._opts.preserve:
            self._remove_scratch_dir(test_scratch_dir)

    def _run_script(self, script: str, cwd: str) -> None:
        env = copy.copy(os.environ)
        for k, v in self._opts.env.items():
            env[k] = v
        try:
            subprocess.run(
                args=[script],
                stdin=subprocess.DEVNULL,
                capture_output=True,
                cwd=cwd,
                env=env,
                check=True,
            )
        except subprocess.CalledProcessError as e:
            print(f"{script} failed with exit code {e.returncode}!")
            print(f"stderr:\n{e.stderr}")
            print(f"stdout:\n{e.stdout}")
            raise

TestSuites = typing.Dict[str, typing.List[str]]

def get_all_tests(options: Options) -> TestSuites:
    """
    Find all the tests in the test directory and return the test suites.
    """
    test_suites = {}
    for root, dirs, files in os.walk(options.test_dir, topdown=True):
        dirs[:] = [d for d in dirs if not exclude_dir(d)]
        test_cases = []
        for file in files:
            if not exclude_file(file):
                test_cases.append(file)
        assert root == os.path.normpath(root)
        test_suites[root] = test_cases
    return test_suites


def resolve_listed_tests(
    tests: typing.List[str], options: Options
) -> TestSuites:
    """
    Resolve the list of tests passed on the command line into their
    respective test suites. Tests can either be paths, or test names
    relative to the test directory.
    """
    test_suites = {}
    for test in tests:
        if not os.path.exists(test):
            test = os.path.join(options.test_dir, test)
            if not os.path.exists(test):
                raise RuntimeError(f"Test {test} does not exist!")

        test = os.path.normpath(os.path.abspath(test))
        assert test.startswith(options.test_dir)
        test_suite = os.path.dirname(test)
        test_case = os.path.basename(test)
        test_suites.setdefault(test_suite, []).append(test_case)

    return test_suites

def run_tests(test_suites: TestSuites, options: Options) -> bool:
    """
    Runs all the tests in the :test_suites: with the given :options:.
    Prints the results to stdout.
    """
    tests = {}
    for test_dir, test_files in test_suites.items():
        with TestSuite(test_dir, options) as test_suite:
            test_files = sorted(set(test_files))
            for test_file in test_files:
                with test_suite.test_case(test_file) as test_case:
                    tests[test_case.name] = test_case.run()

    successes = 0
    for test, status in tests.items():
        if status:
            successes += 1
        else:
            print(f"FAIL: {test}")
    if successes == len(tests):
        print(f"PASSED all {len(tests)} tests!")
        return True
    else:
        print(f"FAILED {len(tests) - successes} / {len(tests)} tests!")
        return False


if __name__ == "__main__":
    CLI_TEST_DIR = os.path.dirname(sys.argv[0])
    REPO_DIR = os.path.join(CLI_TEST_DIR, "..", "..")
    PROGRAMS_DIR = os.path.join(REPO_DIR, "programs")
    TESTS_DIR = os.path.join(REPO_DIR, "tests")
    ZSTD_PATH = os.path.join(PROGRAMS_DIR, "zstd")
    ZSTDGREP_PATH = os.path.join(PROGRAMS_DIR, "zstdgrep")
    DATAGEN_PATH = os.path.join(TESTS_DIR, "datagen")

    parser = argparse.ArgumentParser(
        description=(
            "Runs the zstd CLI tests. Exits nonzero on failure. Default arguments are\n"
            "generally correct. Pass --preserve to preserve test output for debugging,\n"
            "and --verbose to get verbose test output.\n"
        )
    )
    parser.add_argument(
        "--preserve",
        action="store_true",
        help="Preserve the scratch directory TEST_DIR/scratch/ for debugging purposes."
    )
    parser.add_argument("--verbose", action="store_true", help="Verbose test output.")
    parser.add_argument("--timeout", default=60, type=int, help="Test case timeout in seconds. Set to 0 to disable timeouts.")
    parser.add_argument(
        "--exec-prefix",
        default=None,
        help="Sets the EXEC_PREFIX environment variable. Prefix to invocations of the zstd CLI."
    )
    parser.add_argument(
        "--zstd",
        default=ZSTD_PATH,
        help="Sets the ZSTD_BIN environment variable. Path of the zstd CLI."
    )
    parser.add_argument(
        "--zstdgrep",
        default=ZSTDGREP_PATH,
        help="Sets the ZSTDGREP_BIN environment variable. Path of the zstdgrep CLI."
    )
    parser.add_argument(
        "--datagen",
        default=DATAGEN_PATH,
        help="Sets the DATAGEN_BIN environment variable. Path to the datagen CLI."
    )
    parser.add_argument(
        "--test-dir",
        default=CLI_TEST_DIR,
        help=(
            "Runs the tests under this directory. "
            "Adds TEST_DIR/bin/ to path. "
            "Scratch directory located in TEST_DIR/scratch/."
        )
    )
    parser.add_argument(
        "tests",
        nargs="*",
        help="Run only these test cases. Can either be paths or test names relative to TEST_DIR/"
    )
    args = parser.parse_args()

    if args.timeout <= 0:
        args.timeout = None

    args.test_dir = os.path.normpath(os.path.abspath(args.test_dir))
    bin_dir = os.path.join(args.test_dir, "bin")
    scratch_dir = os.path.join(args.test_dir, "scratch")

    env = {}
    if args.exec_prefix is not None:
        env["EXEC_PREFIX"] = args.exec_prefix
    env["ZSTD_BIN"] = os.path.abspath(args.zstd)
    env["DATAGEN_BIN"] = os.path.abspath(args.datagen)
    env["ZSTDGREP_BIN"] = os.path.abspath(args.zstdgrep)
    env["COMMON"] = os.path.abspath(os.path.join(args.test_dir, "common"))
    env["PATH"] = os.path.abspath(bin_dir) + ":" + os.getenv("PATH", "")

    opts = Options(
        env=env,
        timeout=args.timeout,
        verbose=args.verbose,
        preserve=args.preserve,
        test_dir=args.test_dir,
        scratch_dir=scratch_dir,
    )

    if len(args.tests) == 0:
        tests = get_all_tests(opts)
    else:
        tests = resolve_listed_tests(args.tests, opts)

    success = run_tests(tests, opts)
    if success:
        sys.exit(0)
    else:
        sys.exit(1)
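
Taken together, the flags parsed above give invocations like the following (test names are paths relative to the test directory, as described in the README):

```
./run.py --verbose --timeout 120 basic/help.sh compression/levels.sh
./run.py --preserve --exec-prefix 'valgrind -q'
```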