From ebbd67599897b8a2510f92e6392cee100df62ee6 Mon Sep 17 00:00:00 2001 From: Dimitris Apostolou Date: Sat, 13 Nov 2021 10:04:04 +0200 Subject: [PATCH] Fix typos --- .github/workflows/dev-short-tests.yml | 2 +- CONTRIBUTING.md | 24 +++++++++++------------ build/single_file_libs/zstd-in.c | 2 +- build/single_file_libs/zstddeclib-in.c | 2 +- doc/educational_decoder/zstd_decompress.c | 2 +- doc/zstd_compression_format.md | 2 +- lib/README.md | 2 +- lib/common/compiler.h | 2 +- lib/compress/huf_compress.c | 4 ++-- lib/compress/zstd_compress.c | 2 +- lib/compress/zstd_compress_internal.h | 2 +- lib/compress/zstd_compress_superblock.c | 2 +- lib/compress/zstd_cwksp.h | 2 +- lib/compress/zstd_ldm.c | 2 +- lib/decompress/huf_decompress.c | 2 +- lib/decompress/huf_decompress_amd64.S | 6 +++--- lib/zdict.h | 4 ++-- lib/zstd.h | 4 ++-- programs/dibio.c | 4 ++-- programs/util.c | 4 ++-- programs/util.h | 2 +- programs/zstd.1 | 2 +- programs/zstd.1.md | 2 +- tests/README.md | 2 +- tests/automated_benchmarking.py | 2 +- tests/fuzzer.c | 4 ++-- tests/paramgrill.c | 8 ++++---- tests/playTests.sh | 2 +- tests/zstreamtest.c | 10 +++++----- zlibWrapper/examples/fitblk.c | 2 +- zlibWrapper/examples/fitblk_original.c | 2 +- 31 files changed, 57 insertions(+), 57 deletions(-) diff --git a/.github/workflows/dev-short-tests.yml b/.github/workflows/dev-short-tests.yml index 8c1e2912..c68fe5ed 100644 --- a/.github/workflows/dev-short-tests.yml +++ b/.github/workflows/dev-short-tests.yml @@ -335,7 +335,7 @@ jobs: # This test currently fails on Github Actions specifically. # Possible reason : TTY emulation. # Note that the same test works fine locally and on travisCI. -# This will have to be fixed before transfering the test to GA. +# This will have to be fixed before transferring the test to GA. # versions-compatibility: # runs-on: ubuntu-latest # steps: diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5effa26e..a936d747 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -47,7 +47,7 @@ Our contribution process works in three main stages: * Topic and development: * Make a new branch on your fork about the topic you're developing for ``` - # branch names should be consise but sufficiently informative + # branch names should be concise but sufficiently informative git checkout -b git push origin ``` @@ -104,7 +104,7 @@ Our contribution process works in three main stages: issue at hand, then please indicate this by requesting that an issue be closed by commenting. * Just because your changes have been merged does not mean the topic or larger issue is complete. Remember that the change must make it to an official zstd release for it to be meaningful. We recommend - that contributers track the activity on their pull request and corresponding issue(s) page(s) until + that contributors track the activity on their pull request and corresponding issue(s) page(s) until their change makes it to the next release of zstd. Users will often discover bugs in your code or suggest ways to refine and improve your initial changes even after the pull request is merged. @@ -270,15 +270,15 @@ for level 1 compression on Zstd. Typically this means, you have identified a sec code that you think can be made to run faster. The first thing you will want to do is make sure that the piece of code is actually taking up -a notable amount of time to run. It is usually not worth optimzing something which accounts for less than +a notable amount of time to run. 
It is usually not worth optimizing something which accounts for less than 0.0001% of the total running time. Luckily, there are tools to help with this. Profilers will let you see how much time your code spends inside a particular function. -If your target code snippit is only part of a function, it might be worth trying to -isolate that snippit by moving it to its own function (this is usually not necessary but +If your target code snippet is only part of a function, it might be worth trying to +isolate that snippet by moving it to its own function (this is usually not necessary but might be). -Most profilers (including the profilers dicusssed below) will generate a call graph of -functions for you. Your goal will be to find your function of interest in this call grapch +Most profilers (including the profilers discussed below) will generate a call graph of +functions for you. Your goal will be to find your function of interest in this call graph and then inspect the time spent inside of it. You might also want to to look at the annotated assembly which most profilers will provide you with. @@ -301,16 +301,16 @@ $ zstd -b1 -i5 # this will run for 5 seconds 5. Once you run your benchmarking script, switch back over to instruments and attach your process to the time profiler. You can do this by: * Clicking on the `All Processes` drop down in the top left of the toolbar. - * Selecting your process from the dropdown. In my case, it is just going to be labled + * Selecting your process from the dropdown. In my case, it is just going to be labeled `zstd` * Hitting the bright red record circle button on the top left of the toolbar -6. You profiler will now start collecting metrics from your bencharking script. Once +6. You profiler will now start collecting metrics from your benchmarking script. Once you think you have collected enough samples (usually this is the case after 3 seconds of recording), stop your profiler. 7. Make sure that in toolbar of the bottom window, `profile` is selected. 8. You should be able to see your call graph. * If you don't see the call graph or an incomplete call graph, make sure you have compiled - zstd and your benchmarking scripg using debug flags. On mac and linux, this just means + zstd and your benchmarking script using debug flags. On mac and linux, this just means you will have to supply the `-g` flag alone with your build script. You might also have to provide the `-fno-omit-frame-pointer` flag 9. Dig down the graph to find your function call and then inspect it by double clicking @@ -329,7 +329,7 @@ Some general notes on perf: counter statistics. Perf uses a high resolution timer and this is likely one of the first things your team will run when assessing your PR. * Perf has a long list of hardware counters that can be viewed with `perf --list`. -When measuring optimizations, something worth trying is to make sure the handware +When measuring optimizations, something worth trying is to make sure the hardware counters you expect to be impacted by your change are in fact being so. For example, if you expect the L1 cache misses to decrease with your change, you can look at the counter `L1-dcache-load-misses` @@ -368,7 +368,7 @@ Follow these steps to link travis-ci with your github fork of zstd TODO ### appveyor -Follow these steps to link circle-ci with your girhub fork of zstd +Follow these steps to link circle-ci with your github fork of zstd 1. Make sure you are logged into your github account 2. 
Go to https://www.appveyor.com/
diff --git a/build/single_file_libs/zstd-in.c b/build/single_file_libs/zstd-in.c
index f73a2e72..733dcb75 100644
--- a/build/single_file_libs/zstd-in.c
+++ b/build/single_file_libs/zstd-in.c
@@ -25,7 +25,7 @@
  * Note: MEM_MODULE stops xxhash redefining BYTE, U16, etc., which are also
  * defined in mem.h (breaking C99 compatibility).
  *
- * Note: the undefs for xxHash allow Zstd's implementation to coinside with with
+ * Note: the undefs for xxHash allow Zstd's implementation to coincide with with
  * standalone xxHash usage (with global defines).
  *
  * Note: multithreading is enabled for all platforms apart from Emscripten.
diff --git a/build/single_file_libs/zstddeclib-in.c b/build/single_file_libs/zstddeclib-in.c
index a5fd958e..cbf70c61 100644
--- a/build/single_file_libs/zstddeclib-in.c
+++ b/build/single_file_libs/zstddeclib-in.c
@@ -25,7 +25,7 @@
  * Note: MEM_MODULE stops xxhash redefining BYTE, U16, etc., which are also
  * defined in mem.h (breaking C99 compatibility).
  *
- * Note: the undefs for xxHash allow Zstd's implementation to coinside with with
+ * Note: the undefs for xxHash allow Zstd's implementation to coincide with with
  * standalone xxHash usage (with global defines).
  */
 #define DEBUGLEVEL 0
diff --git a/doc/educational_decoder/zstd_decompress.c b/doc/educational_decoder/zstd_decompress.c
index 62e6f0dd..93640708 100644
--- a/doc/educational_decoder/zstd_decompress.c
+++ b/doc/educational_decoder/zstd_decompress.c
@@ -2145,7 +2145,7 @@ static void FSE_init_dtable(FSE_dtable *const dtable,
     // "All remaining symbols are sorted in their natural order. Starting from
     // symbol 0 and table position 0, each symbol gets attributed as many cells
-    // as its probability. Cell allocation is spreaded, not linear."
+    // as its probability. Cell allocation is spread, not linear."
     // Place the rest in the table
     const u16 step = (size >> 1) + (size >> 3) + 3;
     const u16 mask = size - 1;
diff --git a/doc/zstd_compression_format.md b/doc/zstd_compression_format.md
index bb244d43..fc09bd55 100644
--- a/doc/zstd_compression_format.md
+++ b/doc/zstd_compression_format.md
@@ -1124,7 +1124,7 @@ These symbols define a full state reset, reading `Accuracy_Log` bits.
 Then, all remaining symbols, sorted in natural order, are allocated cells.
 Starting from symbol `0` (if it exists), and table position `0`,
 each symbol gets allocated as many cells as its probability.
-Cell allocation is spreaded, not linear :
+Cell allocation is spread, not linear :
 each successor position follows this rule :
 
 ```
diff --git a/lib/README.md b/lib/README.md
index aab0869a..4c9d8f05 100644
--- a/lib/README.md
+++ b/lib/README.md
@@ -125,7 +125,7 @@ The file structure is designed to make this selection manually achievable for
   an `ZSTD_getErrorName` (implied by `ZSTD_LIB_MINIFY`).
 
   Finally, when integrating into your application, make sure you're doing link-
-  time optimation and unused symbol garbage collection (via some combination of,
+  time optimization and unused symbol garbage collection (via some combination of,
   e.g., `-flto`, `-ffat-lto-objects`, `-fuse-linker-plugin`,
   `-ffunction-sections`, `-fdata-sections`, `-fmerge-all-constants`,
   `-Wl,--gc-sections`, `-Wl,-z,norelro`, and an archiver that understands
diff --git a/lib/common/compiler.h b/lib/common/compiler.h
index ddbba550..a9062d0f 100644
--- a/lib/common/compiler.h
+++ b/lib/common/compiler.h
@@ -40,7 +40,7 @@
 /**
   On MSVC qsort requires that functions passed into it use the __cdecl calling conversion(CC).
-  This explictly marks such functions as __cdecl so that the code will still compile
+  This explicitly marks such functions as __cdecl so that the code will still compile
   if a CC other than __cdecl has been made the default.
 */
 #if defined(_MSC_VER)
diff --git a/lib/compress/huf_compress.c b/lib/compress/huf_compress.c
index 07d57f55..facec330 100644
--- a/lib/compress/huf_compress.c
+++ b/lib/compress/huf_compress.c
@@ -760,7 +760,7 @@ typedef struct {
 } HUF_CStream_t;
 
 /**! HUF_initCStream():
- * Initializes the bistream.
+ * Initializes the bitstream.
  * @returns 0 or an error code.
  */
 static size_t HUF_initCStream(HUF_CStream_t* bitC,
@@ -779,7 +779,7 @@ static size_t HUF_initCStream(HUF_CStream_t* bitC,
  *
  * @param elt The element we're adding. This is a (nbBits, value) pair.
  * See the HUF_CStream_t docs for the format.
- * @param idx Insert into the bistream at this idx.
+ * @param idx Insert into the bitstream at this idx.
 * @param kFast This is a template parameter. If the bitstream is guaranteed
 * to have at least 4 unused bits after this call it may be 1,
 * otherwise it must be 0. HUF_addBits() is faster when fast is set.
diff --git a/lib/compress/zstd_compress.c b/lib/compress/zstd_compress.c
index 09f18d6d..32e486cd 100644
--- a/lib/compress/zstd_compress.c
+++ b/lib/compress/zstd_compress.c
@@ -1333,7 +1333,7 @@ ZSTD_adjustCParams_internal(ZSTD_compressionParameters cPar,
         break;
     case ZSTD_cpm_createCDict:
         /* Assume a small source size when creating a dictionary
-         * with an unkown source size.
+         * with an unknown source size.
          */
         if (dictSize && srcSize == ZSTD_CONTENTSIZE_UNKNOWN)
             srcSize = minSrcSize;
diff --git a/lib/compress/zstd_compress_internal.h b/lib/compress/zstd_compress_internal.h
index 9e4c1ac1..a9f3486d 100644
--- a/lib/compress/zstd_compress_internal.h
+++ b/lib/compress/zstd_compress_internal.h
@@ -392,7 +392,7 @@ struct ZSTD_CCtx_s {
     ZSTD_blockState_t blockState;
     U32* entropyWorkspace; /* entropy workspace of ENTROPY_WORKSPACE_SIZE bytes */
 
-    /* Wether we are streaming or not */
+    /* Whether we are streaming or not */
     ZSTD_buffered_policy_e bufferedPolicy;
 
     /* streaming */
diff --git a/lib/compress/zstd_compress_superblock.c b/lib/compress/zstd_compress_superblock.c
index 82b3ee23..bcbe158b 100644
--- a/lib/compress/zstd_compress_superblock.c
+++ b/lib/compress/zstd_compress_superblock.c
@@ -475,7 +475,7 @@ static size_t ZSTD_compressSubBlock_multi(const seqStore_t* seqStorePtr,
         /* I think there is an optimization opportunity here.
          * Calling ZSTD_estimateSubBlockSize for every sequence can be wasteful
          * since it recalculates estimate from scratch.
-         * For example, it would recount literal distribution and symbol codes everytime.
+         * For example, it would recount literal distribution and symbol codes every time.
          */
         cBlockSizeEstimate = ZSTD_estimateSubBlockSize(lp, litSize, ofCodePtr, llCodePtr, mlCodePtr,
                                  seqCount, &nextCBlock->entropy, entropyMetadata,
diff --git a/lib/compress/zstd_cwksp.h b/lib/compress/zstd_cwksp.h
index 2656d26c..7ba90262 100644
--- a/lib/compress/zstd_cwksp.h
+++ b/lib/compress/zstd_cwksp.h
@@ -219,7 +219,7 @@ MEM_STATIC size_t ZSTD_cwksp_aligned_alloc_size(size_t size) {
 MEM_STATIC size_t ZSTD_cwksp_slack_space_required(void) {
     /* For alignment, the wksp will always allocate an additional n_1=[1, 64] bytes
      * to align the beginning of tables section, as well as another n_2=[0, 63] bytes
-     * to align the beginning of the aligned secion.
+     * to align the beginning of the aligned section.
      *
      * n_1 + n_2 == 64 bytes if the cwksp is freshly allocated, due to tables and
      * aligneds being sized in multiples of 64 bytes.
diff --git a/lib/compress/zstd_ldm.c b/lib/compress/zstd_ldm.c
index 45eebcce..19b99f27 100644
--- a/lib/compress/zstd_ldm.c
+++ b/lib/compress/zstd_ldm.c
@@ -478,7 +478,7 @@ static size_t ZSTD_ldm_generateSequences_internal(
          */
         if (anchor > ip + hashed) {
             ZSTD_ldm_gear_reset(&hashState, anchor - minMatchLength, minMatchLength);
-            /* Continue the outter loop at anchor (ip + hashed == anchor). */
+            /* Continue the outer loop at anchor (ip + hashed == anchor). */
             ip = anchor - hashed;
             break;
         }
diff --git a/lib/decompress/huf_decompress.c b/lib/decompress/huf_decompress.c
index bfa72c34..9322c99a 100644
--- a/lib/decompress/huf_decompress.c
+++ b/lib/decompress/huf_decompress.c
@@ -429,7 +429,7 @@ size_t HUF_readDTableX1_wksp_bmi2(HUF_DTable* DTable, const void* src, size_t sr
     /* fill DTable
      * We fill all entries of each weight in order.
-     * That way length is a constant for each iteration of the outter loop.
+     * That way length is a constant for each iteration of the outer loop.
      * We can switch based on the length to a different inner loop which is
      * optimized for that particular case.
      */
diff --git a/lib/decompress/huf_decompress_amd64.S b/lib/decompress/huf_decompress_amd64.S
index 83e3d756..769e5b3d 100644
--- a/lib/decompress/huf_decompress_amd64.S
+++ b/lib/decompress/huf_decompress_amd64.S
@@ -3,7 +3,7 @@
 /* Calling convention:
  *
  * %rdi contains the first argument: HUF_DecompressAsmArgs*.
- * %rbp is'nt maintained (no frame pointer).
+ * %rbp isn't maintained (no frame pointer).
  * %rsp contains the stack pointer that grows down.
  * No red-zone is assumed, only addresses >= %rsp are used.
  * All register contents are preserved.
@@ -123,7 +123,7 @@ HUF_decompress4X1_usingDTable_internal_bmi2_asm_loop:
     subq $24, %rsp
 .L_4X1_compute_olimit:
-    /* Computes how many iterations we can do savely
+    /* Computes how many iterations we can do safely
      * %r15, %rax may be clobbered
      * rbx, rdx must be saved
      * op3 & ip0 mustn't be clobbered
@@ -389,7 +389,7 @@ HUF_decompress4X2_usingDTable_internal_bmi2_asm_loop:
     subq $8, %rsp
 .L_4X2_compute_olimit:
-    /* Computes how many iterations we can do savely
+    /* Computes how many iterations we can do safely
      * %r15, %rax may be clobbered
      * rdx must be saved
      * op[1,2,3,4] & ip0 mustn't be clobbered
diff --git a/lib/zdict.h b/lib/zdict.h
index 75b05dbf..ac98a169 100644
--- a/lib/zdict.h
+++ b/lib/zdict.h
@@ -46,7 +46,7 @@ extern "C" {
  *
  * Zstd can use dictionaries to improve compression ratio of small data.
  * Traditionally small files don't compress well because there is very little
- * repetion in a single sample, since it is small. But, if you are compressing
+ * repetition in a single sample, since it is small. But, if you are compressing
  * many similar files, like a bunch of JSON records that share the same
  * structure, you can train a dictionary on ahead of time on some samples of
  * these files. Then, zstd can use the dictionary to find repetitions that are
@@ -132,7 +132,7 @@ extern "C" {
  *
  * # Benchmark levels 1-3 without a dictionary
 * zstd -b1e3 -r /path/to/my/files
- * # Benchmark levels 1-3 with a dictioanry
+ * # Benchmark levels 1-3 with a dictionary
 * zstd -b1e3 -r /path/to/my/files -D /path/to/my/dictionary
 *
 * When should I retrain a dictionary?
diff --git a/lib/zstd.h b/lib/zstd.h
index 6709c65d..571d5fe9 100644
--- a/lib/zstd.h
+++ b/lib/zstd.h
@@ -247,7 +247,7 @@ ZSTDLIB_API size_t ZSTD_decompressDCtx(ZSTD_DCtx* dctx,
  *
  * It's possible to reset all parameters to "default" using ZSTD_CCtx_reset().
  *
- * This API supercedes all other "advanced" API entry points in the experimental section.
+ * This API supersedes all other "advanced" API entry points in the experimental section.
  * In the future, we expect to remove from experimental API entry points which are redundant with this API.
  */
@@ -1804,7 +1804,7 @@ ZSTDLIB_API size_t ZSTD_CCtx_refPrefix_advanced(ZSTD_CCtx* cctx, const void* pre
  *
  * Note that this means that the CDict tables can no longer be copied into the
  * CCtx, so the dict attachment mode ZSTD_dictForceCopy will no longer be
- * useable. The dictionary can only be attached or reloaded.
+ * usable. The dictionary can only be attached or reloaded.
  *
  * In general, you should expect compression to be faster--sometimes very much
  * so--and CDict creation to be slightly slower. Eventually, we will probably
diff --git a/programs/dibio.c b/programs/dibio.c
index 49fa2118..e7fb905e 100644
--- a/programs/dibio.c
+++ b/programs/dibio.c
@@ -270,7 +270,7 @@ static fileStats DiB_fileStats(const char** fileNamesTable, int nbFiles, size_t
     int n;
     memset(&fs, 0, sizeof(fs));
-    // We assume that if chunking is requsted, the chunk size is < SAMPLESIZE_MAX
+    // We assume that if chunking is requested, the chunk size is < SAMPLESIZE_MAX
     assert( chunkSize <= SAMPLESIZE_MAX );
     for (n=0; n 0 && (FUZ_rand(&lseed) & 7) == 0) {
                 DISPLAYLEVEL(6, "t%u: Modify nbWorkers: %d -> %d \n", testNb, nbWorkers, nbWorkers + iter);
diff --git a/zlibWrapper/examples/fitblk.c b/zlibWrapper/examples/fitblk.c
index 669b176e..8dc7071e 100644
--- a/zlibWrapper/examples/fitblk.c
+++ b/zlibWrapper/examples/fitblk.c
@@ -119,7 +119,7 @@ local int recompress(z_streamp inf, z_streamp def)
         if (ret == Z_MEM_ERROR)
             return ret;
 
-        /* compress what was decompresed until done or no room */
+        /* compress what was decompressed until done or no room */
         def->avail_in = RAWLEN - inf->avail_out;
         def->next_in = raw;
         if (inf->avail_out != 0)
diff --git a/zlibWrapper/examples/fitblk_original.c b/zlibWrapper/examples/fitblk_original.c
index 20f351bf..723dc002 100644
--- a/zlibWrapper/examples/fitblk_original.c
+++ b/zlibWrapper/examples/fitblk_original.c
@@ -109,7 +109,7 @@ local int recompress(z_streamp inf, z_streamp def)
         if (ret == Z_MEM_ERROR)
             return ret;
 
-        /* compress what was decompresed until done or no room */
+        /* compress what was decompressed until done or no room */
         def->avail_in = RAWLEN - inf->avail_out;
         def->next_in = raw;
         if (inf->avail_out != 0)