.. _testingcode-main:

"""""""""""""""""""
Testing Dakota Code
"""""""""""""""""""

==========
Unit Tests
==========

Unit tests are intended for testing specific units, classes, and functions
when they can be readily constructed (and/or provided mocks) as needed. Unit
testing also serves as a mechanism for Test-Driven Development (TDD), a best
practice for implementing new capability. A few of the benefits of TDD
include the following:

- Enforces (and measures) modularity of code and functionality
- Encourages incrementally correct development
- Ensures correct behavior
- Documents the API and software contract
- Promotes code coverage and other Software Quality Assurance (SQA) metrics

Historically, Dakota has used both Boost.Test and Teuchos unit test
features, but has recently officially adopted the former. For minimal unit
testing examples, see:

- :file:`src/unit/min_unit_test.cpp` (Boost.Test ``unit_test.hpp``)
- :file:`src/unit/leja_sampling.cpp` (Boost.Test ``minimal.hpp``)

Some more recent examples include:

- :file:`src/util/unit/MathToolsTest.cpp`
- :file:`src/util/unit/LinearSolverTest.cpp`
- :file:`src/surrogates/unit/PolynomialRegressionTest.cpp`

To add a test with a TDD mindset:

#. Call out the new unit test source file, e.g., ``my_test.cpp``, in the
   relevant ``CMakeLists.txt``, e.g.,
   :file:`src/surrogates/unit/CMakeLists.txt`. See the helper functions
   ``dakota_add_unit_test`` (adds the test, links libraries, and registers
   it with CTest) and ``dakota_copy_test_file`` (copies a file with a
   dependency). The build will fail because the file does not yet exist.

#. Add the new file ``my_test.cpp`` with a failing test macro, e.g.,
   ``BOOST_CHECK(false)``, to verify that it builds and that the test
   fails. Name files and associated data directories in a helpful and
   consistent manner.

#. Use Boost utilities/macros to assess individual test conditions as
   PASS/FAIL as needed.

#. Compile and run with ``ctest -L UnitTest`` or ``ctest -R my_test``.

#. Iteratively add and refine tests and modify Dakota core source code as
   the capability evolves.
To run all unit tests:

.. code-block::

   cd dakota/build/

   # Run all unit tests (-L is short for --label-regex):
   ctest -L UnitTest

   # With detailed output at the top level (-VV is short for --extra-verbose):
   ctest -L UnitTest -VV

   # Run a single test, selected by regular expression:
   ctest -L UnitTest -R surrogate_unit_tests

A failing CTest unit test can be diagnosed using the following as a
starting point:

.. code-block::

   # Change to the directory containing the test executable:
   cd build/src/unit

   # First, manually run the failing test to see what information is
   # provided related to the failure(s):
   ./surrogate_unit_tests

   # To see available Boost.Test options:
   ./surrogate_unit_tests --help

   # To get detailed debugging information from a unit test:
   ./surrogate_unit_tests --log_level all

.. note::

   A web search can also turn up current best practices for Boost.Test and
   specifics related to the details of the test failure(s).

================
Regression Tests
================

Regression tests compare the output of complete Dakota studies against
baseline behavior to ensure that changes to the code do not cause
unexpected changes to output. Ideally they are fast running and use models
with known behavior, such as polynomials or other canonical problems.

The following are a few key concepts in Dakota's regression test system:

- In the source tree, most important test-related content is located in
  the :file:`test/` directory. Test files are named ``dakota_*.in``. Each
  test file has a baseline file named ``dakota_*.base``. Some tests have
  other associated data files and drivers.
- Configuring Dakota causes test files and associated content to be copied
  to subfolders within the :file:`test/` folder of the build tree. This is
  where they will be run.
- A single test file can contain multiple numbered serial and parallel
  subtests. Each subtest, after extraction from the test file, is a valid
  Dakota input file.
- Tests usually should be run using the ``ctest`` command. CTest uses the
  script ``dakota_test.perl``, located in the :file:`test/` directory, to
  do most of the heavy lifting. This script can also be run from the
  command line; run it with the argument ``--man`` for documentation of
  its options.
- Subtests can be categorized and described using CTest labels (use
  ``ctest --print-labels`` in a build tree to view the labels of existing
  tests). One purpose of labels is to state whether an optional component
  of Dakota is needed to run the test.

Running Regression Tests
------------------------

Dakota's full regression test suite contains approximately 300 test files
and more than a thousand subtests. It typically takes between several tens
of minutes and a few hours to complete, depending on available computing
resources. The test system executes Dakota for each subtest, collects the
output, and compares it to a baseline. There are three possible results
for a subtest:

- **PASS**: Dakota output matched the baseline to within a numerical
  tolerance
- **DIFF**: Dakota ran to completion, but its output did not match the
  baseline
- **FAIL**: Dakota did not run to completion (it failed to run altogether
  or returned a nonzero exit code)

In a Dakota build tree, the ``ctest`` command is the best way to run
Dakota tests, including regression tests. Running the command with no
options runs all the tests sequentially. A few helpful options:

- ``-j N``: Run N tests concurrently. Be aware that some of Dakota's
  regression tests may make use of local or MPI parallelism and may use
  multiple cores.
- ``-L