Changes between Version 6 and Version 7 of u/erica/memoryBaseline


Timestamp: 02/05/14 13:36:44
Author: Erica Kaminski

  • u/erica/memoryBaseline

    v6  v7
    19  19  4. Then, in addition to the total memory being used for info allocations (in astrobear.log), one should also add this "baseline" memory when estimating the total memory the simulation is using. For example, if my baseline is 0.2 GB/processor, then the baseline = 1.6 GB per node with 8 processors. Now we can use that when trying to understand the total memory being used, as reported in bluehive.log at the end of a simulation, as the following tables show.
    20  20
    21      For total info allocations being reported as ~200 MB (~0.2 GB), 1 node/8 processors:
        21  For total info allocations being reported as ~200 MB (~0.2 GB), 1 node/8 processors, bluehive.log reports:
    22  22  || Resource    || Requested || Used ||
    23  23  || memory (GB) || 16.0      || 2.0  ||
    ..  ..  (unchanged lines omitted)
    33  33  || memory (GB) || 40        || 80   ||
    34  34
        35  Now, to explain the difference between the number astrobear.log reports for the total info allocations and the memory reported as actually used by the cluster in bluehive.log, we add the baseline memory to the info allocations. This gives 0.2 + 1.6 = 1.8 GB, 0.2 + 2*1.6 = 3.4 GB, and 50 + 8*1.6 = 62.8 GB. The difference between the info+baseline estimate and the actual used memory is largest for the simulation running on more nodes. The difference might be due to redundant calculations (ghost cells) and parallelization.
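
For reference, a minimal sketch of the estimate the added text describes, assuming the baseline of 0.2 GB/processor and 8 processors/node quoted above; the function name estimate_total_gb and the small driver loop are hypothetical and not part of the wiki page:

{{{#!python
def estimate_total_gb(info_alloc_gb, nodes, procs_per_node=8, baseline_gb_per_proc=0.2):
    # Estimate = info allocations reported by astrobear.log
    #            + baseline memory for every processor in the run.
    return info_alloc_gb + baseline_gb_per_proc * procs_per_node * nodes

# Worked examples from the text: 0.2 GB of info allocations on 1 and 2 nodes,
# and 50 GB of info allocations on 8 nodes.
for info_gb, nodes in [(0.2, 1), (0.2, 2), (50.0, 8)]:
    print(f"{info_gb} GB info + baseline on {nodes} node(s) -> ~{estimate_total_gb(info_gb, nodes):.1f} GB")
}}}

The last estimate (~62.8 GB) can then be compared with the 80 GB that bluehive.log reports as used; the gap is what the text attributes to ghost cells and parallelization.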