Parameterizing Fortran tests with pFUnit
I recently set about parameterizing a test suite for some Fortran code written using the pFUnit framework.
Given the sparse documentation around pFUnit, the process resulted in much bashing of my head against the wall. In the hope of saving others time in future, I thought I'd write up my experiences and the key steps in a short post.
Introduction
I’m going to start by assuming that anyone reading this is familiar with Fortran, basic concepts around testing (unit tests, parameterization, etc.), and basic use of pFUnit for testing Fortran code.
I’ll look at an example code, including tests, and then work through the process of parameterizing them using the pFUnit utilities. I’ll also provide a short summary of gotchas and other things to be aware of when doing this.
All of the code described in this article is available in a GitHub repository, fully runnable with instructions and comparisons, at github.com/jatkinson1000/pfunit-parameterization-demo.
Setup - a basic code to test
Consider the following simple Fortran module multiply_mod.f90 that
provides a function to multiply two floating point numbers together:
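A minimal version of such a module might look like the following sketch (the version in the linked repository may differ in detail):

```fortran
! multiply_mod.f90 - a minimal sketch of the module under test.
module multiply_mod
  implicit none
  private
  public :: multiply

contains

  !> Multiply two real numbers together.
  function multiply(a, b) result(c)
    real, intent(in) :: a, b
    real :: c
    c = a * b
  end function multiply

end module multiply_mod
```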
We use it in the following simple program (multiply_prog.f90) that
takes two numbers from the user and prints the product to the screen:
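As a sketch, such a driver program could look like (again, the repository version may differ slightly):

```fortran
! multiply_prog.f90 - a sketch of the driver program.
program multiply_prog
  use multiply_mod, only: multiply
  implicit none
  real :: a, b

  ! Read two numbers from the user and print their product.
  print *, 'Enter two numbers to multiply:'
  read *, a, b
  print *, 'Product = ', multiply(a, b)
end program multiply_prog
```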
You can check this by compiling with the CMakeLists.txt or Makefile in the git repository associated with this post.
Simple pFUnit test
The following is a basic pFUnit file that defines a testing module
with a single test that checks that the result of 2 * 2 using our
multiply function is 4.
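A basic `.pf` file of this form might look like the following sketch (subroutine and module names are illustrative and may differ from the repository version):

```fortran
! test_multiply.pf - a sketch of a basic, unparameterized pFUnit test.
module test_multiply
  use funit
  use multiply_mod, only: multiply
  implicit none

contains

  @test
  subroutine test_multiply_numbers()
    ! Check that 2 * 2 == 4 to within an absolute tolerance.
    @assertEqual(4.0, multiply(2.0, 2.0), tolerance=1.0e-6)
  end subroutine test_multiply_numbers

end module test_multiply
```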
Parameterizing the tests
Anyone with unit testing experience will understand the need to
parameterize this test.
Currently we check 2 * 2 = 4, but what about multiplying two negative
numbers? Does that work? Or a positive and a negative? etc.
Ideally we would be able to test all of these without writing out the same test code multiple times and changing the numbers. And indeed, this is what test parameterization does for us.
In frameworks such as pytest for Python the process is rather easy, but doing so in compiled Fortran using pFUnit requires a little more leg work. Below I’ll set out how the simple test above can be modified to add parameterization.
1. Defining a testParameter
pFUnit provides the AbstractTestParameter parent type that can be
extended using the extends attribute.
We need to extend this, adding to its members the variables that we will be passing
to the test as parameters.
We also need to add a toString procedure (a deferred type-bound
function that AbstractTestParameter expects to be implemented) that will be
used to output an identifier for a specific parameter set.
Finally we annotate this type as a test parameter using @testParameter
so that pFUnit knows to treat it accordingly.
So, to parameterize the test_multiply_numbers test above we need to provide
two numbers to be multiplied together (a and b) and an expected result.
It’s also nice to add a character variable to provide a short description
of the particular case the parameter set is used for (e.g. + * +, + * -, etc.).
This can be done as follows:
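A sketch of the extended parameter type and its toString implementation, following the description above (the exact declarations in the repository may differ):

```fortran
! The extended parameter type, annotated for pFUnit.
@testParameter
type, extends(AbstractTestParameter) :: MultiplyTestParameters
  real :: a                   !< First number to multiply
  real :: b                   !< Second number to multiply
  real :: expected            !< Expected result of a * b
  character(len=32) :: desc   !< Short description of the case
contains
  procedure :: toString
end type MultiplyTestParameters
```

with the deferred toString implemented in the module's contains section:

```fortran
! The deferred toString, here simply returning the description.
function toString(this) result(string)
  class(MultiplyTestParameters), intent(in) :: this
  character(:), allocatable :: string
  string = trim(this%desc)
end function toString
```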
Here we simply use the character description in the toString method, as
a way of keeping the example concise.
However, it is a good idea to add a formatted description of the specific
parameters to make it easier to debug failed tests.
An example of this is given below.
2. Defining a testCase
The next step is to define a testCase type by extending another pFUnit
parent type: ParameterizedTestCase.
We define a specific child type for our test, and extend it with a variable of
the MultiplyTestParameters type so that the test case can hold and use a
parameter set.
This has to be accompanied by a constructor function that will take a
specific set of parameters (as a MultiplyTestParameters type from above)
and set them as this member variable on a MultiplyTestCase.
Finally, similarly to the parameters above, the testCase needs to be
annotated using @testCase for the benefit of pFUnit with the name of its
constructor specified.
This can be done with the following:
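A sketch of the test case type and its constructor, following the description above (names are illustrative):

```fortran
! The test case type, annotated with its constructor for pFUnit.
@testCase(constructor=newMultiplyTestCase)
type, extends(ParameterizedTestCase) :: MultiplyTestCase
  type(MultiplyTestParameters) :: params
end type MultiplyTestCase
```

with the constructor defined in the module's contains section:

```fortran
! Constructor: store a specific parameter set on the test case.
function newMultiplyTestCase(testParameter) result(tst)
  type(MultiplyTestParameters), intent(in) :: testParameter
  type(MultiplyTestCase) :: tst
  tst%params = testParameter
end function newMultiplyTestCase
```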
3. Define a parameters getter
The last piece of infrastructure we need to provide is a parameters getter.
This is where we will define all of the various parameter sets we want to
run our test over.
It takes the form of a function that returns an array of
MultiplyTestParameters, each item of which defines a specific test case.
It will be used in the next and final step.
For us, this can be done as follows:
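As a sketch, with a handful of cases covering positive, negative, and zero inputs (the exact parameter sets in the repository may differ):

```fortran
! The parameters getter: each array element is one parameter set
! the test will be run with.
function get_multiply_parameters() result(params)
  type(MultiplyTestParameters), allocatable :: params(:)
  params = [ &
    MultiplyTestParameters( 2.0,  2.0,  4.0, '+ * +'), &
    MultiplyTestParameters(-2.0, -3.0,  6.0, '- * -'), &
    MultiplyTestParameters( 2.0, -3.0, -6.0, '+ * -'), &
    MultiplyTestParameters( 2.0,  0.0,  0.0, 'x * 0')  &
    ]
end function get_multiply_parameters
```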
Note how we use the desc character string to add a brief description of
what we are testing with each item.
4. Update the tests to handle parameters
The final step is to update the simple test from the original example to use this new infrastructure.
To do so requires adding the getter function as the testParameters in
the @test annotation.
This tells pFUnit to construct and run the test for each set of parameters
returned by the get_multiply_parameters() getter.
At the same time we make the test subroutine take an argument this of
the MultiplyTestCase type.
Recall that this type is capable of holding a
MultiplyTestParameters type so will contain the parameters for the
current test case.
To access the parameters from this inside the test we use component access
with the % operator, using the variable names defined in MultiplyTestParameters.
For example, this%params%a and this%params%b are the two numbers to be
multiplied, and this%params%expected is the expected result as defined in
the MultiplyTestParameters type.
These replace the previously hardcoded numbers with those returned by
get_multiply_parameters().
Here is the updated test subroutine:
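Putting the pieces above together, the parameterized test might look like the following sketch:

```fortran
! The parameterized test subroutine: pFUnit runs this once per
! parameter set returned by get_multiply_parameters().
@test(testParameters={get_multiply_parameters()})
subroutine test_multiply_numbers(this)
  class(MultiplyTestCase), intent(inout) :: this
  real :: result

  result = multiply(this%params%a, this%params%b)
  @assertEqual(this%params%expected, result, tolerance=1.0e-6)
end subroutine test_multiply_numbers
```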
The single test will now be run with different parameters, keeping it concise and easy to maintain whilst increasing coverage and checking of edge cases.
Things to be aware of
There are a few gotchas and pitfalls I have come across that are worth being aware of when writing parameterized tests.
Beware the implied save!
In Fortran, if a local variable is given a value as part of its declaration,
the initialisation happens only once, not on each call to the
procedure, and the variable implicitly acquires the save attribute.
As such, any updates to the variable will be saved or 'remembered',
persisting to subsequent calls.
See Implied Save on Fortran-Lang for more details.
This is something to be particularly wary of in parameterized tests as
the generated pFUnit code will loop over a subroutine several times.
Rather than setting a variable's value at its declaration, you
should ensure that there is a clean assignment that occurs
separately inside the procedure:
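As an illustrative sketch of the two forms side by side:

```fortran
! Risky: initialisation at declaration implies the save attribute,
! so `total` keeps its value across repeated calls - including the
! repeated calls pFUnit generates for each parameter set.
real :: total = 0.0

! Safe: declare, then assign explicitly on every call.
real :: total
total = 0.0
```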
This particular gotcha had me bashing my head for some time before someone else suggested it could be the issue.
Checking for “almost equal” on arrays
pFUnit comes with some built-in checkers such as @assertTrue and
@assertEqual, the latter of which can take an absolute tolerance.
However, if you want to operate on arrays, or want more elaborate checks
closer to numpy's (e.g. with a relative tolerance), you may need
to define your own assert_allclose etc., as we do in the
FTorch implementation.
You can still leverage pFUnit's utilities by applying @assertTrue() to the
result of any custom function, provided it returns a logical.
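As a hypothetical sketch of such a helper (assert_allclose is not part of pFUnit; it is a user-defined function returning a logical, and its name and signature here are assumptions):

```fortran
! A numpy-style relative-tolerance check for arrays.
function assert_allclose(got, expect, rtol) result(close)
  real, intent(in) :: got(:), expect(:)
  real, intent(in) :: rtol
  logical :: close
  close = all(abs(got - expect) <= rtol * abs(expect))
end function assert_allclose

! which can then be combined with pFUnit's built-in checker:
! @assertTrue(assert_allclose(result_array, expected_array, 1.0e-5))
```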
If one test in a module is parameterized, all must be
If you define any test in a pFUnit test module to be parameterized, then
all tests in that module need to be parameterized, i.e. no bare @test
annotations.
This one caused me several hours of pain in debugging.
To get around this you can define a minimal fixture that provides a single set of dummy parameters (that go unused) to any tests in the module that do not need parameterizing, for example:
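A sketch of such a workaround, reusing the MultiplyTestParameters and MultiplyTestCase types from above (the getter and test names here are illustrative):

```fortran
! A dummy getter returning a single (unused) parameter set, so that
! tests that don't need parameterizing can share the module.
function get_dummy_parameters() result(params)
  type(MultiplyTestParameters), allocatable :: params(:)
  params = [MultiplyTestParameters(0.0, 0.0, 0.0, 'unused')]
end function get_dummy_parameters

! An otherwise-unparameterized test, run once with the dummy set.
@test(testParameters={get_dummy_parameters()})
subroutine test_multiply_by_zero(this)
  class(MultiplyTestCase), intent(inout) :: this
  @assertEqual(0.0, multiply(0.0, 1.0))
end subroutine test_multiply_by_zero
```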
Better toString representations
In the main text I noted that it would be good to have a longer toString
representation that explicitly output the test parameters.
Here is an example of how we can achieve this, displaying the two numbers
we multiply and the expected result when running the tests:
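A sketch of such a toString, writing the parameters into a fixed-length buffer before trimming (the format string and buffer length are illustrative choices):

```fortran
! A more informative toString including the actual parameter values.
function toString(this) result(string)
  class(MultiplyTestParameters), intent(in) :: this
  character(:), allocatable :: string
  character(len=64) :: buffer

  write(buffer, '(A, ": ", F7.2, " * ", F7.2, " = ", F7.2)') &
    trim(this%desc), this%a, this%b, this%expected
  string = trim(buffer)
end function toString
```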
Multiple fixtures
It is possible to define multiple fixture types in a single test module.
However, care may need to be taken when setting an appropriate toString
method if differing behaviour is required, for example customising how
the parameters are displayed in the test string output for different types.
Since AbstractTestParameter expects the deferred toString
type-bound procedure, the solution is to define explicitly named
functions and then bind them to the toString procedure of the
appropriate parameter type using the => operator.
For example, you might have one set of parameters for testing
multiplication and another for testing exponents, each with a differing
toString representation:
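As a sketch of the type declarations involved (PowerTestParameters and the *_toString names are illustrative assumptions; the bound functions themselves would be defined in the module's contains section):

```fortran
! Two parameter types, each binding its own explicitly named
! function to the deferred toString via the => operator.
@testParameter
type, extends(AbstractTestParameter) :: MultiplyTestParameters
  real :: a, b, expected
contains
  procedure :: toString => multiply_toString
end type MultiplyTestParameters

@testParameter
type, extends(AbstractTestParameter) :: PowerTestParameters
  real :: base, exponent, expected
contains
  procedure :: toString => power_toString
end type PowerTestParameters
```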
Implicit limit on test name length
Remember that for certain standards and compilers there is a default
limit on line length of 132 characters.
If the generated pFUnit code exceeds this then it will lead to a
truncation error when compiling.
This most often occurs in the call suite%addTest at the end of the
module and imposes an implicit limit of around 35 characters on
test/subroutine names, as the rest of the call for a custom test with
parameters becomes rather long.
Consider whether you really need "test" or "modulename" in the
individual test name, given that we know these are tests and the suite name
is output as a header when running CTest.
Again, this lost me an entire morning trying to track down the source of the error in the generated testing code.
Summary
This article has walked through the process of parameterizing tests in pFUnit, adapting a basic test to define the infrastructure required to loop over a set of predefined parameter sets. Following these steps allows construction of concise and maintainable test suites for Fortran code, making it easy to add coverage and edge cases.
I hope that this description is useful to others out there to understand and use these tools in their own work. If so, please drop a star on the GitHub repository, get in touch, or grab me a coffee.
If you spot anything that is incorrect or could be made clearer, please get in touch.