SUBROUTINE MakeChomboFile(nframe, tnow)
Defined In:
Inputs:
- INTEGER nframe: the number of the frame being written.
- REAL tnow: the current time of the simulation, in seconds.
Description:
Writes the current state of the system to a Chombo-formatted HDF5 file. In parallel problems, this requires the use of data transfer functions defined in mpi_exec.f90.
Information written to the Chombo file includes:
- Number of dimensions.
- Problem domain.
- Number of levels of refinement.
- Grid dimensions.
- Cell-centered data (Info%q).
- Face- and edge-centered data (Info%aux).
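
For orientation, a file laid out this way has roughly the following structure. This sketch follows the standard Chombo HDF5 naming conventions (num_levels, prob_domain, boxes, and so on); the exact attribute set and the placement of MHD_data are assumptions, so check the routine itself for specifics:

```
/                          root attributes: num_levels, num_components, ...
/Chombo_global             attribute: SpaceDim
/level_0                   attributes: dt, dx, time, ref_ratio, prob_domain
/level_0/boxes             one box (grid extent) per grid on the level
/level_0/data:datatype=0   flattened cell-centered data (Info%q)
/level_0/MHD_data          face- and edge-centered data (Info%aux)
/level_1, /level_2, ...    one group per level of refinement
```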
MakeChomboFile() runs on the master processor, and can acquire most of the data it needs from the tree structure that exists there. The grid data that resides on other processors is pulled in one NodeInfo structure at a time using MPI_CopyFieldsToMaster(), written to the Chombo file, and eliminated from the master processor using MPI_RemoveFieldsCopy(). These functions are not executed if xbear is being used (meaning that the MPIBEAR preprocessor tag is not defined).
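
Schematically, the master-side loop looks something like this. This is a sketch only: the node traversal, variable names, and argument lists are placeholders, with just the two MPI helper names taken from the description above:

```fortran
! Sketch of the master-side per-grid loop; the traversal and argument
! lists are placeholders, not AstroBEAR's actual declarations.
DO WHILE (ASSOCIATED(node))                ! walk the grids on this level
#ifdef MPIBEAR
   CALL MPI_CopyFieldsToMaster(node%Info)  ! pull fields from the owning processor
#endif
   ! ... flatten node%Info%q / node%Info%aux and write them to the Chombo file ...
#ifdef MPIBEAR
   CALL MPI_RemoveFieldsCopy(node%Info)    ! free the copy; the master holds one grid at a time
#endif
   node => node%next
END DO
```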
For a full tutorial on using the HDF5 library, click here. When working with complex file structures like Chombo, it's easy for the number of open HDF5 handles to get out of control; for this reason I've modularized as much Attribute and Dataset code as I can, and I encourage others to use the Add_Chombo_Attribute_*() and Add_Chombo_Dataset_*() subroutines wherever possible when modifying this routine. These subroutines leave no open handles behind them, so they are ideal when you just need to write a piece of data to the file and move on.
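
As a reference for what those helpers do internally, here is a minimal, self-contained sketch of creating, writing, and fully closing a single root-level attribute with the raw HDF5 Fortran API. The file name, attribute value, and program structure are illustrative, not taken from the actual routine:

```fortran
! Minimal sketch of the create/write/close pattern a helper like
! Add_Chombo_Attribute_*() wraps. Names and values are illustrative.
PROGRAM chombo_attribute_sketch
  USE HDF5
  IMPLICIT NONE
  INTEGER(HID_T)   :: file_id, space_id, attr_id
  INTEGER(HSIZE_T) :: dims(1) = (/1_HSIZE_T/)
  INTEGER          :: nlevels = 3, ierr

  CALL h5open_f(ierr)
  CALL h5fcreate_f("chombo_sketch.hdf5", H5F_ACC_TRUNC_F, file_id, ierr)

  ! Create a scalar integer attribute on the root group, e.g. num_levels.
  CALL h5screate_simple_f(1, dims, space_id, ierr)
  CALL h5acreate_f(file_id, "num_levels", H5T_NATIVE_INTEGER, space_id, &
                   attr_id, ierr)
  CALL h5awrite_f(attr_id, H5T_NATIVE_INTEGER, nlevels, dims, ierr)

  ! Close every handle we opened -- nothing is left dangling.
  CALL h5aclose_f(attr_id, ierr)
  CALL h5sclose_f(space_id, ierr)
  CALL h5fclose_f(file_id, ierr)
  CALL h5close_f(ierr)
END PROGRAM chombo_attribute_sketch
```

The point of the pattern is the symmetric close calls at the end: every h5*create_f or h5*open_f gets a matching h5*close_f before the subroutine returns.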
The way MakeChomboFile() populates the main datasets data:datatype=0 and MHD_data is worthy of special attention. In order to keep from holding all the data for an entire level on the master processor, we create a dataset specially configured to handle "slabbed" data; this allows us to fill the dataset piecemeal. The dataset is populated using the following steps (a code sketch follows the list):
- Initialize the dataset with enough room for all the data on the level.
- For each grid on the level:
  - Copy the grid data to the master processor.
  - Flatten the data to a 1D array.
  - Write the flattened data array (the "slab") to the dataset, starting from the current offset within the dataset.
  - Adjust offset to put the next slab in the dataset right after the one just written.
  - Eliminate the copied grid data from the master processor.
- Close the dataset.
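
The following self-contained sketch shows this pattern with the raw HDF5 Fortran API. The grid count, slab size, and fill values are all illustrative; the real routine gets its slabs from flattened NodeInfo fields:

```fortran
! Self-contained sketch of the slabbed-write pattern described above.
! Grid count, slab size, and fill values are illustrative only.
PROGRAM slabbed_write_sketch
  USE HDF5
  IMPLICIT NONE
  INTEGER, PARAMETER :: ngrids = 4, gridsize = 1000
  INTEGER(HID_T)     :: file_id, filespace, memspace, dset_id
  INTEGER(HSIZE_T)   :: total_dims(1), slab_dims(1), offset(1)
  DOUBLE PRECISION   :: slab(gridsize)
  INTEGER            :: i, ierr

  CALL h5open_f(ierr)
  CALL h5fcreate_f("slab_sketch.hdf5", H5F_ACC_TRUNC_F, file_id, ierr)

  ! Step 1: create the dataset with room for every grid on the level.
  total_dims(1) = INT(ngrids, HSIZE_T)*INT(gridsize, HSIZE_T)
  CALL h5screate_simple_f(1, total_dims, filespace, ierr)
  CALL h5dcreate_f(file_id, "data:datatype=0", H5T_NATIVE_DOUBLE, &
                   filespace, dset_id, ierr)

  slab_dims(1) = INT(gridsize, HSIZE_T)
  offset(1)    = 0
  CALL h5screate_simple_f(1, slab_dims, memspace, ierr)

  DO i = 1, ngrids
     ! Step 2: in the real routine, the grid's fields arrive here
     ! (MPI_CopyFieldsToMaster) and are flattened to a 1D array.
     slab = DBLE(i)

     ! Step 3: write the slab starting at the current offset.
     CALL h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, &
                                slab_dims, ierr)
     CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, slab, slab_dims, ierr, &
                     mem_space_id=memspace, file_space_id=filespace)

     ! Step 4: advance the offset so the next slab lands right after this
     ! one. (MPI_RemoveFieldsCopy would free the copied grid data here.)
     offset(1) = offset(1) + slab_dims(1)
  END DO

  ! Step 5: close the dataset and every other handle.
  CALL h5dclose_f(dset_id, ierr)
  CALL h5sclose_f(memspace, ierr)
  CALL h5sclose_f(filespace, ierr)
  CALL h5fclose_f(file_id, ierr)
  CALL h5close_f(ierr)
END PROGRAM slabbed_write_sketch
```

The key calls are h5sselect_hyperslab_f(), which positions each slab at the running offset within the file dataspace, and the optional mem_space_id/file_space_id arguments to h5dwrite_f(), which let a small in-memory buffer map onto part of a larger on-disk dataset.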
VisIt simply ignores any data in the file that does not fit the Chombo format, so additional data can be added to the Chombo output files if necessary. Be wary of making the files too big, though; additional Attributes are unlikely to break the bank, but extra Datasets might.
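
If you do add something, the helper routines above keep the bookkeeping safe. A hypothetical call might look like the following (the actual Add_Chombo_Attribute_*() argument lists live in the I/O source and may differ):

```fortran
! Hypothetical usage -- check the real Add_Chombo_Attribute_*() interface
! before copying this; the argument order here is a guess.
CALL Add_Chombo_Attribute_Real(file_id, "gamma", gamma)
```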
Called In:
Modules Used:
HDF5 (USEd by the beario module, and essential to this subroutine).
Files Included:
None.