largeNbDicts

largeNbDicts is a benchmark tool dedicated to a specific scenario: dictionary decompression using a very large number of dictionaries. When dictionaries change constantly, they are always "cold", and decompression suffers increased latency due to cache misses.

The tool was created to investigate performance in this scenario and to experiment with mitigation techniques.

Command line :

largeNbDicts [Options] filename(s)

Options :
-r           : recursively load all files in subdirectories (default: off)
-B#          : split input into blocks of size # (default: no split)
-#           : use compression level # (default: 3)
-D #         : use # as a dictionary (default: create one)
-i#          : number of benchmark rounds (default: 6)
--nbDicts=#  : set the number of dictionaries to # (default: one per block)
-h           : help (this text)