Writing and running tests

Jbuilder tries to streamline the testing story as much as possible, so that you can focus on the tests themselves rather than on setting up various test frameworks.

In this section, we explain the workflow for dealing with tests in Jbuilder. In particular we will see how to run the test suite of a project, how to describe your tests to Jbuilder and how to promote test results as expectations.

We distinguish two kinds of tests: inline tests and custom tests. Inline tests are usually written directly inside the ml files of a library. They are the easiest to work with and usually require nothing more than writing (inline_tests) inside your library stanza. Custom tests consist of running an executable and sometimes doing something afterwards, such as diffing its output.

Running tests

Whatever the tests of a project are, the usual way to run tests with Jbuilder is to call jbuilder runtest from the shell. This will run all the tests defined in the current directory and any sub-directory recursively. You can also pass a directory argument to run the tests from a sub-tree. For instance jbuilder runtest test will only run the tests from the test directory and any sub-directory of test recursively.
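For instance:

$ jbuilder runtest
$ jbuilder runtest test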

Note that in any case, jbuilder runtest is simply a short-hand for building the runtest alias, so you can always ask Jbuilder to run the tests in conjunction with other targets by passing @runtest to jbuilder build. For instance:

$ jbuilder build @install @runtest
$ jbuilder build @install @test/runtest

Inline tests

There are several inline test frameworks available for OCaml, such as ppx_inline_test and qtest. We will use ppx_inline_test as an example since, at the time of writing this document, it has the necessary setup to be used with Jbuilder out of the box.

ppx_inline_test allows you to write tests directly inside ml files, as follows:

let rec fact n = if n = 1 then 1 else n * fact (n - 1)

let%test _ = fact 5 = 120

The file has to be preprocessed with the ppx_inline_test ppx rewriter, so for instance the jbuild file might look like this:

 (library
  ((name foo)
   (preprocess (pps (ppx_inline_test)))))

In order to instruct Jbuilder that our library contains inline tests, all we have to do is add an inline_tests field:

 (library
  ((name foo)
   (inline_tests)
   (preprocess (pps (ppx_inline_test)))))

We can now build and execute this test by running jbuilder runtest. For instance, let's make the test fail by replacing 120 by 0:
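let%test _ = fact 5 = 0

Running the tests then reports the failure: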

$ jbuilder runtest
File "src/fact.ml", line 3, characters 0-25: <<(fact 5) = 0>> is false.

FAILED 1 / 1 tests

Note that in this case Jbuilder knew how to build and run the tests without any special configuration. This is because ppx_inline_test defines an inline tests backend, and the library uses it. Some other frameworks, such as qtest, don't come with a special library or ppx rewriter. To use such a framework, you must tell Jbuilder about it, since it cannot guess it. You can do that by adding a backend field:

 (library
  ((name foo)
   (inline_tests ((backend qtest)))))
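With qtest, the tests themselves live in comments of your ml files. For instance, a simple qtest test for fact might look like this sketch, following qtest's comment syntax:

(*$T fact
  fact 5 = 120
*)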

Inline expectation tests

Inline expectation tests are a special case of inline tests where you write a bit of OCaml code that prints something followed by what you expect this code to print. For instance, using ppx_expect:

let%expect_test _ =
  print_endline "Hello, world!";
  [%expect{|
    Hello, world!
  |}]

The test procedure consists of executing the OCaml code and replacing the contents of the [%expect] extension points by the real output. This produces a new file that you can compare to the original source file. Expectation tests are a neat way to write tests, as the following test elements are all clearly identified:

  • the code of the test
  • the test expectation
  • the test outcome

You can have a look at this blog post to find out more about expectation tests. Back to Jbuilder: the workflow for expectation tests is always as follows:

  • write the test with some empty expect nodes in it, as in the sketch after this list
  • run the tests
  • check the suggested correction and promote it as the original source file if you are happy with it
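For instance, the first step might leave an empty expect node like this:

let%expect_test _ =
  print_int (fact 5);
  [%expect]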

Jbuilder makes this workflow very easy: simply add ppx_expect to your list of ppx rewriters, as follows:

 (library
  ((name foo)
   (inline_tests)
   (preprocess (pps (ppx_expect)))))

Then calling jbuilder runtest will run these tests and in case of mismatch Jbuilder will print a diff of the original source file and the suggested correction. For instance:

$ jbuilder runtest
File "src/fact.ml", line 5, characters 0-1:
let rec fact n = if n = 1 then 1 else n * fact (n - 1)

let%expect_test _ =
  print_int (fact 5);
-  [%expect]
+  [%expect{| 120 |}]

In order to accept the correction, simply run:

$ jbuilder promote

You can also make Jbuilder automatically accept the correction after running the tests by typing:

$ jbuilder runtest --auto-promote

Finally, some editor integration is possible to make the editor do the promotion and make the workflow even smoother.

Specifying inline test dependencies

If your tests read files, you must tell Jbuilder about it by adding a deps field to the inline_tests field. The argument of this deps field follows the usual Dependency specification. For instance:

 (library
  ((name foo)
   (inline_tests ((deps (data.txt))))
   (preprocess (pps (ppx_expect)))))
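A test using this dependency might then look like the following sketch, which assumes data.txt exists next to the jbuild file and contains at least one line; running jbuilder promote afterwards fills in the empty expect node:

let%expect_test _ =
  let ic = open_in "data.txt" in
  print_endline (input_line ic);
  close_in ic;
  [%expect]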

Passing special arguments to the test runner

Under the hood, a test executable is built by Jbuilder. Depending on the backend used this runner might take useful command line arguments. You can specify such flags by using a flags field, such as:

 (library
  ((name foo)
   (inline_tests ((flags (-foo bar))))
   (preprocess (pps (ppx_expect)))))

The argument of the flags field follows the Ordered set language.

Using additional libraries in the test runner

When tests are not part of the library code, they may require additional libraries beyond the library being tested. This is the case with qtest, as tests are written in comments. You can specify such libraries using a libraries field, such as:

 (library
  ((name foo)
   (inline_tests ((backend qtest)
                  (libraries (bar))))))

Defining your own inline test backend

If you are writing a test framework, or for specific cases, you might want to define your own inline tests backend. If your framework is naturally implemented by a library or ppx rewriter that the user must use when they want to write tests, then you should define this library as a backend. Otherwise, simply create an empty library with the name you want to give to your backend.

In order to define a library as an inline tests backend, simply add an inline_tests.backend field to the library stanza. An inline tests backend is specified by three parameters:

  1. How to create the test runner
  2. How to build the test runner
  3. How to run the test runner

These three parameters can be specified inside the inline_tests.backend field, which accepts the following fields:

(generate_runner   <action>)
(runner_libraries (<ocaml-libraries>))
(flags             <flags>)
(extends          (<backends>))


<action> follows the User actions specification. It describes an action that will be executed in the directory of libraries using this backend for their tests. The action is expected to produce some OCaml code on its standard output; this code constitutes the test runner. The action can use the following additional variables:

  • ${library-name} which is the name of the library being tested
  • ${impl-files} which is the list of implementation files in the library, i.e. all the .ml and .re files
  • ${intf-files} which is the list of interface files in the library, i.e. all the .mli and .rei files

The runner_libraries field specifies which OCaml libraries the test runner uses. For instance, if the generate_runner action generates something like My_test_framework.runtests (), then you should probably put my_test_framework in the runner_libraries field.
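For instance, a backend for a hypothetical my_test_framework library could look like the following sketch, where the library name and the My_test_framework.runtests entry point are both assumptions:

(library
 ((name my_test_framework)
  (inline_tests.backend
   ((generate_runner (echo "let () = My_test_framework.runtests ()"))
    (runner_libraries (my_test_framework))))))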

If your test runner needs specific flags, you should pass them in the flags field. You can use the ${library-name} variable in this field.
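For instance (the -runner-for flag here is purely hypothetical; use whatever flags your runner actually understands):

(flags (-runner-for ${library-name}))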

Finally, a backend can be an extension of another backend. In this case you must specify the backend it extends in the extends field. For instance, ppx_expect is an extension of ppx_inline_test. It is possible to use a backend with several extensions in a library; however, there must be exactly one root backend, i.e. exactly one backend that is not an extension of another one.

When using a backend with extensions, the various fields are simply concatenated. The order in which they are concatenated is unspecified; however, if a backend b extends a backend a, then a will always come before b.
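For instance, declaring a hypothetical my_expect_like backend as an extension of ppx_inline_test, in the same way ppx_expect is, would look like this sketch:

(library
 ((name my_expect_like)
  (inline_tests.backend
   ((extends (ppx_inline_test))))))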

Example of backend

In this example, we put tests in comments of the form:

(*TEST: assert (fact 5 = 120) *)

The backend for such a framework looks like this:

 (library
  ((name simple_tests)
   (inline_tests.backend
    ((generate_runner
      (run sed "s/(\\*TEST:\\(.*\\)\\*)/let () = \\1;;/" ${impl-files}))))))
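Applied to the test comment above, this sed script would emit roughly the following runner code on its standard output:

let () =  assert (fact 5 = 120) ;;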

Now all you have to do is write (inline_tests ((backend simple_tests))) wherever you want to write such tests. Note that this is only an example: we do not recommend using sed in your build, as this would cause portability problems.

Custom tests

We said in Running tests that to run tests, Jbuilder simply builds the runtest alias. As a result, to define custom tests, you simply need to add an action to this alias in any directory. For instance, if you have a binary tests.exe that you want to run as part of your test suite, simply add this to a jbuild file:

 (alias
  ((name   runtest)
   (action (run ./tests.exe))))
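For this to work, tests.exe must itself be built in the same directory. Assuming the test code lives in a tests.ml module, the complete jbuild might look like this sketch:

(executable ((name tests)))

(alias
 ((name   runtest)
  (action (run ./tests.exe))))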

Diffing the result

It is often the case that we want to compare the output of a test to some expected output. For that, Jbuilder offers the diff action, which in essence is the same as running the diff tool, except that it is better integrated with Jbuilder, and especially with the promote command. For instance, let's consider this test:

 (rule
  ((targets (tests.output))
   (action  (with-stdout-to tests.output (run ./tests.exe)))))

 (alias
  ((name runtest)
   (action (diff tests.expected tests.output))))

After having run tests.exe and dumped its output to tests.output, Jbuilder will compare the latter to tests.expected. In case of mismatch, Jbuilder will print a diff, and the jbuilder promote command can then be used to copy the generated tests.output file over tests.expected in the source tree. This provides a nice way of dealing with the usual write code, run, promote cycle of testing. For instance:

$ jbuilder runtest
File "tests.expected", line 1, characters 0-1:
-Hello, world!
+Good bye!
$ jbuilder promote
Promoting _build/default/tests.output to tests.expected.

Note that, when available, the diffing is done using the patdiff tool, which displays nicer-looking diffs than the standard diff tool. You can change the command used by passing --diff-command CMD to Jbuilder.
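For instance, to fall back to the standard diff tool with unified output, you might run (exact quoting may depend on your shell):

$ jbuilder runtest --diff-command "diff -u"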