Distributed Processing in AstroBEAR
Distributed execution of AstroBEAR simulations is done with the mpibear executable. Other than distributed processing, mpibear has no special advantages over the normal xbear executable.
AstroBEAR uses the MPI (Message Passing Interface) library, which provides a collection of send and receive functions for passing data and control signals between the processors in a group. Some of these functions simply send or receive their data payload and let execution resume, while others block, preventing the calling process from continuing until a matching signal arrives from another processor. This blocking behavior allows the program as a whole to keep its participating processors synchronized.
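The sketch below (plain MPI in C, not AstroBEAR source) contrasts the two behaviors: a blocking send on one rank, and a non-blocking receive on another that only synchronizes when MPI_Wait is called.

```c
/* Minimal sketch (not AstroBEAR code): blocking vs. non-blocking MPI calls.
 * Run with at least 2 ranks, e.g. mpirun -np 2 ./a.out                     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send: returns once the buffer is safe to reuse. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Non-blocking receive: posts the request and returns immediately. */
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ...other work could overlap with the communication here... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* block until the message arrives */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```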
When mpibear runs, one processor is designated the master processor and all others are worker processors. The master processor is responsible for initializing all the workers with the program's environment variables, parceling out work to other processors, and maintaining load balance between all the processors involved in the mpibear run.
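A hypothetical master/worker skeleton in plain MPI (this is not AstroBEAR's actual scheduler; the tags and task payload are invented for illustration) looks roughly like this: rank 0 hands out work and a stop signal, and each worker loops until it is told to stop.

```c
/* Hypothetical master/worker pattern, not AstroBEAR's scheduler. */
#include <mpi.h>

#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* master */
        int task = 0;
        for (int w = 1; w < size; w++)      /* one task per worker... */
            MPI_Send(&task, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        for (int w = 1; w < size; w++)      /* ...then tell everyone to stop */
            MPI_Send(&task, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
    } else {                                /* worker */
        int task;
        MPI_Status st;
        while (1) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            /* ...compute on the assigned task... */
        }
    }

    MPI_Finalize();
    return 0;
}
```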
Nodes are assigned to processors through their associated NodeInfo structures (see the DataStructures page for more information). Each NodeInfo structure has an attribute MPI_id used to store the ID of the processor responsible for its calculations. The master processor will periodically reassign a node by changing its Info%MPI_id value and then sending the node's data to the new processor.
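The real NodeInfo is a Fortran derived type described on the DataStructures page; the C-style sketch below is only an illustration of the reassignment idea, with the field names, payload size, and message tag invented for the example.

```c
/* Hypothetical illustration only: the real NodeInfo is a Fortran derived
 * type (see the DataStructures page). Fields and payload are invented.   */
#include <mpi.h>

typedef struct {
    int    MPI_id;     /* rank of the processor responsible for this node */
    double data[64];   /* stand-in for the node's grid data */
} NodeInfo;

/* Master-side reassignment: update the owner, then ship the node's data
 * to the newly responsible processor.                                    */
void reassign_node(NodeInfo *info, int new_rank, MPI_Comm comm)
{
    info->MPI_id = new_rank;
    MPI_Send(info->data, 64, MPI_DOUBLE, new_rank, /*tag=*/7, comm);
}
```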
For a more detailed description of the MPI library, visit this tutorial from the MHPCC.