AstroBEAR Test Suite

Any living code base requires checks to make sure that its continued development does not introduce any new bugs. AstroBEAR's developers are working to create a comprehensive list of tests (the test suite) that runs a set of simulations in order to ensure the fundamental behavior of the code is robust and unchanged. Tests should run and report results with minimal interaction required. Ideally, running a test and viewing the result should be as easy as executing a single script.

The basic design philosophy is as follows:

  • There exist four levels of interactivity and completeness when running the tests:
    1. Verification tests, run via verifytests.s (Est. time: ? min)
    2. Individual tests, run via their own runtest.s files, located in their respective folders (within modules/tests/).
    3. General tests, run via alltests.s (Est. time: ? min)
    4. Showcase tests, run via show.s (Est. time: ? min)

The above scripts are all located in the code's main directory, and are intended to be executed in that location after a successful build of the code. The form of the result depends on the test. For simple tests the result may consist of "pass" or "fail". For other tests, or in cases where failure is reported, the result may need to include certain figures, such as comparison figures (comparing the result versus the expected result).
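For example, a typical post-build session might look like the following (a sketch only; the build command is an assumption about your setup):

{{{
# From the code's main directory, after a successful build:
make                # build the mpibear executable (build target name assumed)
./verifytests.s     # quick verification pass before checking in changes
./alltests.s        # longer, general test pass (e.g., overnight)
}}}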

Verification tests (verifytests.s) should be run by developers after ANY changes to the code, and before checking those changes in, to ensure that nothing has been broken. For simplicity, all verification tests will run mpibear on the current machine using 2 processors. More scripting will be needed for the larger showcase simulations if they need to run in parallel on 2+ cores in order to finish in a reasonable amount of time. General tests are meant to be automated, periodic (overnight) tests with no user interaction, which will post their results on the web.
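For the verification level specifically, a minimal driver sketch might look like this, assuming each test directory under modules/tests/ can be run directly with mpibear on 2 local processors (the executable path and the use of exit status are illustrative; the real pass/fail decision comes from the error comparison described below):

{{{
#!/bin/sh
# Illustrative verifytests.s-style driver, not the actual script.
BEAR=$PWD/mpibear                  # assumes the executable sits in the main directory
for t in modules/tests/*/ ; do
    ( cd "$t" && mpirun -np 2 "$BEAR" > run.log 2>&1 ) \
        && echo "ran:  $t" || echo "FAIL: $t"
done
}}}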

  • Automating the general tests will require scripts which correctly run the executable on a given platform.
  • The run should be designed to produce data and exit in a repeatable fashion (i.e., not requiring the user to stop the simulation after a certain time).
  • Upon completion, the script needs to invoke bear2fix which, via the input-file option, processes the data and produces a result (see the sketch after this list).
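Putting the three points above together, a single test's runtest.s could be sketched as follows (a sketch under assumptions: bear2fix reads its parameter file on standard input, prints a pass/fail line, and the relative paths match the test directory layout described below):

{{{
#!/bin/sh
# Sketch of a runtest.s-style script, run from within a test directory.
# 1. Run the executable so it produces data and exits on its own
#    (the final frame/time is fixed in the test's global.data).
mpirun -np 2 ../../../mpibear > run.log 2>&1 || { echo "FAIL: run crashed" ; exit 1 ; }
# 2. Post-process the data with bear2fix to produce the result.
../../../bear2fix < bear2fix.data.test > result.log
grep -q PASS result.log && echo PASS || echo FAIL
}}}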

The old BasicTestingProcedure page describes the previous procedure for the verification tests.

Showcase tests are meant to illustrate whether the code functions as expected in a general sense when running well-understood simulations. These tests will produce images which show the end state of the simulation. Inspection of these images by the user should either confirm that the simulation ran as expected, or serve as an indicator, and a starting point, for diagnosing that something is going wrong.


Current testing (6/30/11, under revision)

At the moment we do post-processing tests of AstroBEAR 2.0. Basically, tests consist of comparing the flow variables in a chombo file produced by the test modules against a reference chombo file (see below). All test-related files are located in modules/tests/. Each test has its own directory which contains (a sketch of the layout appears after this list):

  • the problem's data files: global.data, modules.data, problem.data, profile.data, solver.data, communication.data, io.data and physics.data,
  • the problem's module problem.f90,
  • two data files used by BearToFix to produce test reports: bear2fix.data.test and bear2fix.data.img,
  • a "ref" directory which contains the reference chombo file.

The reference chombo file. This file, chombo00001.hdf, has been produced with the code and verified by someone in our research group against well-documented quantitative analytical or numerical studies. Reference chombos are included with the current distribution of the code, and information about them can be found on their corresponding test wiki pages.

Running tests. This is done using two shell scripts:

  • postprocess.s, which iterates over each of the test problems and calls the go.s script,
  • go.s, which basically cd's into the current test directory, executes BearToFix (see below), converts the images from ps to png, and copies them back into the image directory. These images will then be linked to the corresponding test report wiki pages. A sketch of this flow appears after this list.
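A sketch of the flow (the real scripts may differ; the image directory path and the assumption that BearToFix reads its parameters on standard input are illustrative):

{{{
#!/bin/sh
# postprocess.s essentially does:
#     for t in modules/tests/*/ ; do ./go.s "$t" ; done
# go.s-style processing of one test directory ($1):
cd "$1" || exit 1
bear2fix < bear2fix.data.test           # error computation + report images
for f in *.ps ; do
    convert "$f" "${f%.ps}.png"         # ps -> png
done
cp ./*.png ../../images/                # copy back to the (assumed) image directory
}}}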

Test reports. Once the test runs are finished, BearToFix is executed. It does the error computations and produces two png images showing the reference and the current result of the run. BearToFix is executed twice: the first time it reads bear2fix.data.img to set the color tables for the test report images, whereas the second time it reads bear2fix.data.test to perform the error computation and produce the test report images.
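In script form, the two passes amount to the following (again assuming the parameter files are fed on standard input):

{{{
bear2fix < bear2fix.data.img     # pass 1: set color tables for the report images
bear2fix < bear2fix.data.test    # pass 2: compute errors, write the report images
}}}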

Test error calculations. These are done in BearToFix via L2 and L-infinity norms (reference to be added soon).
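For reference, the standard definitions over the N cells of the grid, where q_i is the computed and q_i^ref the reference value of a fluid variable (the exact normalization used by BearToFix is an assumption here):

{{{
L_2      = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} ( q_i - q_i^{ref} )^2 }
L_\infty = \max_i \left| q_i - q_i^{ref} \right|
}}}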

To use BearToFix by yourself (e.g., in order to do further adjustment of the test report parameters), select operation 11, then test 4 (Generic Test), and input the maximum mean error tolerance (see below) allowed for each fluid variable. BearToFix will then compute the errors and produce the test report images. You should then copy these images to the test images directory (reference to be added soon).
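The interactive prompts can also be driven from a script, e.g. with a here-document (the menu numbers come from the text above; the tolerance value and the assumption of one value per prompt are illustrative):

{{{
# Operation 11, test 4 (Generic Test), then a tolerance per fluid variable:
./bear2fix <<EOF
11
4
1.0e-3
EOF
}}}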

If the test fails, BearToFix outputs the current errors as well as suggested new tolerances (5% higher than the current errors). The user can then copy and paste these values into the bear2fix.data.test file and run BearToFix again. Inspection of the test report wiki page should show why the test failed.

The test log file is located in the modules/tests directory and briefly keeps track of test results. We plan to add the log info into the test report images through the convert command, so the user can see the images and the error estimates at once.
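One possible way to do this with ImageMagick's convert, splicing a text strip onto the bottom of a report image (the file names and the choice of log line are illustrative):

{{{
# Stamp the last line of the test log onto the bottom of a report image.
convert report.png -gravity South -background white -splice 0x20 \
        -annotate +0+2 "$(tail -n 1 modules/tests/testlog)" report_annotated.png
}}}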

List of tests

Verification tests

|| Test folder name || Brief description || Result type || Est. time to run || Test wiki page ||
|| 01-1D_sod_shock-tube || Runs 1D Sod shock tube with all combinations of integration schemes || P/F with rel. error, text file || 60 sec. (single-proc, grass) || TestSuite/SodShockTube ||
|| 07-Field_Loop_Advection || Advects a loop of magnetic field around the grid with different field strengths and CT methods || P/F with rel. error, text file || 30 minutes (2-proc, grass) || TestSuite/FieldLoop2D ||
|| ??-Radiative shock instability || Two related tests to verify the cooling source term(s) || P/F with rel. error, text file || unknown || TestSuite/RadiativeShocks ||

Showcase tests

|| Test folder name || Brief description || Result type || Est. time to run || Test wiki page ||
|| 06-2D_shocked_clump || Over-dense clump in pressure equilibrium is overrun by a planar shock || Multi-panel PNG image showing fluid variables (n, vx, vy, p, T, clump tracer) at end of simulation || 5 min. (single-proc, grass) || To-Do ||

Still to be tested:

  • Robustness of build on multiple architectures (MPICH2, OpenMPI, MVAPICH, BlueGene; Intel, GFortran)


CASTRO tests

  • Scalability
    • 64³ weak scaling test
      • No gravity
      • Multipole gravity
      • Poisson gravity
  • hydrodynamic solver
    • Sod Shock Tube (Sod, 1978)
    • Double rarefaction (Toro, 1997)
    • Strong shock (Toro, 1997)
  • hydrodynamics, geometry
    • 1D Sedov-Taylor blast wave (Sedov, 1959)
    • 2D cylindrical Sedov-Taylor blast wave (Sedov, 1959)
    • 3D Cartesian Sedov-Taylor blast wave (Sedov, 1959)
  • hydrodynamics, gravity
    • Split piecewise linear, Rayleigh-Taylor (Taylor, 1950)
    • Unsplit piecewise linear, Rayleigh-Taylor (Taylor, 1950)
    • Split PPM (old limiter), Rayleigh-Taylor (Taylor, 1950)
    • Unsplit PPM (old limiter), Rayleigh-Taylor (Taylor, 1950)
    • Split PPM (new limiter), Rayleigh-Taylor (Taylor, 1950)
    • Unsplit PPM (new limiter), Rayleigh-Taylor (Taylor, 1950)