Automated black box regression testing tool

Retest makes it simple to automate black box regression testing on Windows and Unix.

Retest works by reading a retest plan (a .rt plain-text file) and either generating expected files, or generating actual files and comparing them with previously generated expecteds, reporting any discrepancies. (It can also be used purely to generate files.)

All you need to do to use retest (beyond the easy one-off process of installing it) is to create a suitable retest plan file for each application you want to test.

For developers, retest can also be used as a Rust library; see the Retest API.

Retest is free open source software (FOSS) licensed under the GNU General Public License version 3 (GPLv3).

Command Line Interface

retest v4.0.10 © 2019-21 Qtrac Ltd. All Rights Reserved.

usage: retest [verbose] [cpus=n] [nocolor] [tests] [rt.rt]
              run all (or specified numbered) tests and
              save their outputs in the actuals folder and
              diff their outputs with the expecteds

usage: retest [verbose] [cpus=n] [nocolor] [tests] generate [rt.rt]
              run all (or specified numbered) tests and
              save their outputs in the expecteds folder
              (g -g --generate gen generate)

usage: retest doc
              show the manual in your web browser and quit
              (doc -m --manual manual)

usage: retest help
              show this help text and quit (help -h --help /?)

usage: retest version
              show retest's version and quit
              (version -V --version)

verbose: default is: show summary, errors, failures;
         use one verbose to show each test; use two for more;
         use quiet to only show errors or failures
         (v -v --verbose verbose q -q --quiet quiet)
cpus:    if specified uses at most this number of cpus;
         default is to use all available
nocolor: if specified output is monochrome (useful for redirecting)
         default is to use colors (nocolor --nocolor mono --mono)
tests:   numbers of specific tests to run or generate,
         e.g., 1,3,5,8-21 36-39 52 61-65
rt.rt:   the retest plan file to use; defaults to
         rt-{win,unix}.rt if it exists, otherwise to rt.rt

The command line arguments may be given in any order.

License: GNU General Public License Version 3.

Setting cpus=1 is useful if you want to force generation or testing to be done one test at a time in order (e.g., to see the total time). Note that if any non-existent test numbers are specified, they will be silently ignored.

Retest's exit code/error level is 0 if there were no failures or errors generating or testing, 1 if any failures or errors occurred, or 2 for any other kind of error.
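For example, here are some typical invocations (my-plan.rt is a hypothetical plan filename):

```
retest                        # run all tests from rt.rt (or rt-win.rt / rt-unix.rt)
retest generate               # generate (or regenerate) all the expecteds
retest v cpus=1 1,3,5-8       # run tests 1, 3, and 5 to 8, one at a time, verbosely
retest nocolor my-plan.rt     # monochrome output using the given plan file
```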

Retest Plan Files

If you specify a .rt retest plan file on the command line, retest will use it. Otherwise, on Windows, retest will look for rt-win.rt and use that if it exists, falling back to rt.rt otherwise. Similarly, on Unix, retest will look for rt-unix.rt and use that if it exists, falling back to rt.rt otherwise.

Every retest plan file starts with an optional “environment” section and then has one or more “test” sections. (Retest Plan File Examples are shown further on.)

Blank lines and comment lines (beginning with #) are ignored and may be used freely.

Retest Plan Environment Section

If present, this section must come first. It has this form (every entry is optional; user-key and user-value stand for names and values you choose):

[ENV]
APP: application-to-test
EXPECTED_PATH: path-for-expected-files
ACTUAL_PATH: path-for-actual-files
DIFF: comparison-application
SET: user-key: user-value

Here are explanations of the arguments that can (or must) be used:

APP: application-to-test
Optional. If most or every test is of the same application, it is best to put the application here with its path. Note that if you omit this, you'll need to specify the application individually in every test's section. For example:
APP: C:\Program Files\diffpdfc\diffpdfc.exe
You may specify zero or more arguments, each one indented on its own line. If present these will be put before any arguments given in the test sections themselves. For example:
APP: C:\Python36\python.exe
(On Unix an interpreter will normally be found in the $PATH, but on Windows—or when you want to use a particular version when two or more are present—it will need to be specified, as illustrated here.)
EXPECTED_PATH: path-for-expected-files
Optional. This defaults to rt_expected in the current folder. It must be a writable folder. The files generated here are the “expecteds” and are meant to be preserved between runs (to compare against), so the folder needs to be somewhere permanent. For example:
EXPECTED_PATH: V:\diffpdf5\rt_expected
ACTUAL_PATH: path-for-actual-files
Optional. This defaults to rt_actual in the current folder. It must be a writable folder. The files generated here are the “actuals”, which will be compared with the expecteds. For example:
ACTUAL_PATH: V:\tmp\rt_actual
DIFF: comparison-application
Optional. Retest is capable of comparing text files, JSON files, image files (in some common formats), and binary files. However, if most or all of your tests need to use a custom comparison “diff” tool, you can specify it here. For example:
DIFF: C:\bin\comparepdfcmd\comparepdfcmd.exe
If you specify a custom tool, you may also specify zero or more arguments, each one indented on its own line. If present these will be put before any arguments given in the test sections themselves. For example:
DIFF: diff
Alternatively, you can specify the diff to use individually for each test (e.g., if they vary). Note that if you use a custom tool it must return (exit code/error level) 0 for when the two files compared are considered to be the same and non-zero otherwise.
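A custom diff tool only needs to honor this exit-code contract. Here is a minimal sketch in Python; the byte-for-byte comparison is just an assumption for illustration, since a real tool would implement whatever “same” means for your files:

```python
import sys

def files_match(path_a, path_b):
    """Return True if the two files are byte-for-byte identical."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return fa.read() == fb.read()

def main(argv):
    # Retest's contract for a custom diff tool:
    # exit 0 when the two files are considered the same, non-zero otherwise.
    if len(argv) != 3:
        return 2  # usage error
    return 0 if files_match(argv[1], argv[2]) else 1

# When invoked as a script: sys.exit(main(sys.argv))
```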
user-key: user-value
Zero or more, each on its own line. You can specify your own “environment” variables using SET: entries. For example:
SET: INV: E:\accounts\invoices
Now, in any entry for any test, you can use $INV, e.g., $INV\inv681.pdf, and this will be expanded as you'd expect into E:\accounts\invoices\inv681.pdf.
Note that user-keys are case-sensitive (e.g., Q is different from q), and that user-values may not themselves contain variables (except for $HOME).
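Putting these entries together, a minimal environment section might look like this (the paths and the INV variable reuse the illustrative values above):

```
[ENV]
APP: C:\Program Files\diffpdfc\diffpdfc.exe
EXPECTED_PATH: V:\diffpdf5\rt_expected
ACTUAL_PATH: V:\tmp\rt_actual
SET: INV: E:\accounts\invoices
```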

Retest Plan Test Sections

Each retest plan file must have at least one test. Tests are numbered from 1. Each test section has this form:

[n]
NAME: name-or-description
EXITCODE: expected-exit-code
WAIT: wait-time-seconds
STDIN: stdin-filename
STDOUT: stdout-filename
APP: application-to-test
  argument-for-application
DIFF: comparison-application
  argument-for-comparison-application

Here are explanations of the arguments that can (or must) be used:

[n]
Required. Must start from 1, and must be unique. For example:
[1]
NAME: name-or-description
Optional. A test name or description that will appear when tests pass. For example:
NAME: Bug #X0513 (PNG output)
EXITCODE: expected-exit-code
Optional. The exit code the application to test is expected to return. The default is 0, so this must be specified if a non-zero exit code is expected. For example:
EXITCODE: 4
WAIT: wait-time-seconds
Optional. How long retest should wait before running the application to test. The default is 0.0 seconds, i.e., don't wait at all. This is useful for tests that “outrun” the operating system and fail needlessly, but which pass if the operating system is given a short time to catch up before running the test. For example:
WAIT: 0.25
STDIN: stdin-filename
Optional. If the application to test is an interactive console program that expects input from the user, the expected input can be stored in this file, in which case the file's contents will be fed to the application's stdin as if entered by the user. For example:
STDIN: app-stdin26.txt
STDOUT: stdout-filename
Optional. If the application to test outputs to stdout rather than to a file, use this entry to save the stdout to a file which can then be compared. The file will be written to the EXPECTED_PATH if generating, or to the ACTUAL_PATH if testing. For example:
STDOUT: 26.json
APP: application-to-test
Required. Use $APP to use the APP value from the [ENV] section; otherwise give the application to test with its path. Typically, the application is set in the [ENV] section, which simplifies this entry, in most cases reducing it, for example, to:
APP: $APP
You may specify zero or more arguments, each one indented on its own line. (These arguments always follow any that are given in the [ENV] section's APP entry.) At least one should be the output filename, with the form $OUT_PATH/filename.ext (using \ or / path separators on Windows; / on Unix), unless you are using the STDOUT entry.
When generating, the $OUT_PATH will be set to the EXPECTED_PATH value, and when testing it will be set to the ACTUAL_PATH value. (Retest Plan File Examples are shown below.)
DIFF: comparison-application
Optional. Retest can detect and compare images in common formats or JSON files, using the file's suffix; otherwise it compares files assuming they are UTF-8 encoded plain text (and ignores line-endings). However, if you only want to compare the application's exit code, set DIFF: no (or 0 or false). If you have plain text that isn't UTF-8 (or 7-bit ASCII) encoded, you can force retest to use a comparison of your choice. For example, you can set DIFF: rt-binary to force binary comparison, or force JSON, image, or text comparisons using rt-json, rt-image, or rt-text.
Alternatively, you can use an external comparison program, in which case you can also provide zero or more argument-for-comparison-application entries, one per line, each indented. (These arguments always follow any that are given in the [ENV] section's DIFF entry.) For example, here's how to use Unix diff (rather than retest's built-in text comparison) to compare text while ignoring trailing whitespace at the end of each line:
DIFF: diff
Note that if you set DIFF: in the [ENV] section, you can use that setting in each test simply by using:
DIFF: $DIFF
(See Example #3.)

Note that in addition to using $OUT_PATH (which will be automatically set to either EXPECTED_PATH or ACTUAL_PATH), you can also use $HOME which will be set to your home folder (on all platforms).
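Combining these entries, a small test section might look like the following (the application argument and output filename are illustrative, not from a real plan):

```
[1]
NAME: Bug #X0513 (PNG output)
WAIT: 0.25
APP: $APP
  --png
  $OUT_PATH/x0513.png
```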

Retest Plan Examples

Example #1

APP: C:\Program Files\diffpdfc\diffpdfc.exe

NAME: Invoice check

NAME: Selected pages appearance check
DIFF: no

This example has two tests. (Note that all commands shown below occupy one line each but may be wrapped by the browser.)

When generating expecteds the first command line will be:
"C:\Program Files\diffpdfc\diffpdfc.exe" -q -r rt_expected\01.csv V:\pdfs\invoice_old.pdf V:\pdfs\invoice_new.pdf
and the second will be:
"C:\Program Files\diffpdfc\diffpdfc.exe" -q -a --pages2=1-3,6-8 V:\pdfs\pages-a1-1-6.pdf V:\pdfs\pages-a2-1-3,6-8.pdf

When retest is used to run and compare, the first test is expected to produce an exit code/error level of 4, and to output rt_actual\01.csv which is expected to be identical to rt_expected\01.csv. If either of these isn't true a failure will be reported. Note that retest will do a text comparison since that's the default for non-image non-JSON files.

For the second test no files are compared (due to the DIFF: no line), and the exit code is expected to be 0.
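Inferred from the command lines and prose above, a plan producing this example could be written roughly as follows (the split of arguments between the [ENV] section and the tests is an assumption):

```
[ENV]
APP: C:\Program Files\diffpdfc\diffpdfc.exe
  -q

[1]
NAME: Invoice check
EXITCODE: 4
APP: $APP
  -r
  $OUT_PATH\01.csv
  V:\pdfs\invoice_old.pdf
  V:\pdfs\invoice_new.pdf

[2]
NAME: Selected pages appearance check
DIFF: no
APP: $APP
  -a
  --pages2=1-3,6-8
  V:\pdfs\pages-a1-1-6.pdf
  V:\pdfs\pages-a2-1-3,6-8.pdf
```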

Example #2


NAME: JSON output

NAME: Binary output
DIFF: rt-binary

NAME: Text output (ignoring trailing whitespace differences)
DIFF: diff

NAME: Captured output
STDOUT: 04.txt
APP: $HOME/bin/

NAME: Interactive usage
STDIN: stdin05.txt
STDOUT: 05.txt

Here, test 1 is expected to have an exit code of 0 (the default) and to produce a UTF-8 encoded JSON file. Retest's JSON comparison compares the actual JSON values and ignores any superfluous whitespace. (Force a text comparison using DIFF: rt-text if you want to compare JSON files as text.)

Test 2 is expected to produce a binary file, so we have used DIFF: rt-binary to force retest to compare byte-by-byte.

For test 3, we have chosen to use an external diff tool. Notice that for this we must use the $EXPECTED_PATH and the $ACTUAL_PATH so that we can give the external comparison tool the generated expected and the newly created actual to compare.

For test 4 we have an application that outputs to stdout so we tell retest to capture that output to a file which can then be compared.

Test 5 checks interactive usage. The input that the user is expected to enter is in the file stdin05.txt—this is fed into the application as if entered by the user. And the program's output (which is to the console, i.e., stdout) is captured into a file that is then generated or compared against. (Note that on Windows it may sometimes be necessary to use DIFF: rt-binary when using STDOUT.)
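For instance, tests 4 and 5 might be written like this (the application name and its argument are hypothetical):

```
[4]
NAME: Captured output
STDOUT: 04.txt
APP: $HOME/bin/report

[5]
NAME: Interactive usage
STDIN: stdin05.txt
STDOUT: 05.txt
APP: $HOME/bin/report
  --interactive
```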

Example #3

DIFF: diff
SET: TD: test_data



In this example we are using an external Unix diff tool for both tests. Because we aren't using one of retest's built-in comparisons, we must specify the two files to compare.

So, for example, when generating (retest g), the two command lines will be:

./ test_data/30.dat rt_expected/30.txt
./ -a test_data/31.dat rt_expected/31.txt

On Windows, of course, the command lines will differ slightly, and the [ENV] section's APP may need to specify the interpreter, e.g.:

APP: C:\Python36\python.exe

And when testing and comparing (retest v) the command lines will be:

./ test_data/30.dat rt_actual/30.txt
diff -q -Z rt_expected/30.txt rt_actual/30.txt
./ -a test_data/31.dat rt_actual/31.txt
diff -q -Z rt_expected/31.txt rt_actual/31.txt
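A plan producing command lines of this shape could look like the following sketch, where check.py stands in for the (unnamed) script under test:

```
[ENV]
APP: ./check.py
DIFF: diff
  -q
  -Z
SET: TD: test_data

[1]
NAME: Default output
APP: $APP
  $TD/30.dat
  $OUT_PATH/30.txt
DIFF: $DIFF
  $EXPECTED_PATH/30.txt
  $ACTUAL_PATH/30.txt
```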

Example #4

APP: X:\build\reporter.exe
EXPECTED_PATH: X:\build\rt_expected
ACTUAL_PATH: U:\rt_actual
DIFF: T:\bin\comparepdfcmd\comparepdfcmd.exe

NAME: Compare Words (Terms and Conditions)

NAME: Compare Appearance (Advert)

This example shows how you might automate the testing of an application that produces .pdf files that you want to compare using comparepdfcmd. You could use the same approach to compare using diffpdfc, except in that case the [ENV] section's DIFF part would be something like this:

DIFF: T:\bin\diffpdfc\diffpdfc.exe

In the first test the application to test (reporter.exe) reads a configuration file and outputs a .pdf which is then compared using comparepdfcmd. The second test is similar, only the comparison is done by appearance rather than the default of comparing words.



Windows

  1. Download and unzip the retest Windows archive (994KB; MD5 830aeec6e8ada4eb8dedf2640dd17552). This contains retest.exe.
  2. Either move retest.exe into a folder on your %PATH%, or add the folder where you've put it to your %PATH%.

On Windows, retest is supplied as a 64-bit executable. If you require a 32-bit version you'll need to install Rust 1.31 or later and either build it yourself using a 32-bit toolchain or do cargo install qtrac-retest, which will download, build, and install retest.exe.

Unix-like Platforms

Retest is provided in source form and requires Rust 1.31 or later to be installed.

The simplest way is to do cargo install qtrac-retest. This will download, build, and install the retest executable. Alternatively:

  1. Download and unpack retest-4.0.10.tar.gz (39KB; MD5 029f27d1d73c8b14d56615f7c0b7ecc6).
  2. Change into the retest directory and run:
    cargo build --release
  3. Copy the executable target/release/retest to somewhere on your $PATH or use cargo install.