Meeting update - Erica

Colliding Flows

Adjusted Christina's set-up (15-degree shear angle) slightly and sent it to BlueHive, where I was able to run it on 64 cores to about 30/200 frames. The memory footprint of the simulation grows with the filling fractions, and the time it would take to complete there is > 1 month. I asked her to check the output and verify it looks good on Friday; will ping her again today. Ultimately I will have to move this to a bigger machine to cut the completion time significantly, but am currently running into some issues compiling on BlueStreak; will sit down with Baowei to work on it.
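A rough way to sanity-check the > 1 month estimate is a linear extrapolation from frames completed. The elapsed time below is a hypothetical placeholder (only the frame count is given in the text):

```python
def remaining_days(frames_done, frames_total, elapsed_days):
    """Linearly extrapolate remaining wall time from the frame rate so far."""
    rate = frames_done / elapsed_days            # frames per day
    return (frames_total - frames_done) / rate   # days left at that rate

# Hypothetical: if the first 30 of 200 frames took 6 days,
# the remaining 170 frames would need about 34 more days.
print(remaining_days(30, 200, 6.0))  # → 34.0
```

Linear extrapolation is optimistic here, since the memory (and presumably the work per frame) grows with the filling fractions, so the true remaining time would be longer.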

Also, as I have seen in the past, the memory reported by the info arrays in the simulation differs from the actual memory usage on the nodes. I am again seeing a difference of ~30 GB over the 8 nodes I am using. When I ran a baseline test of the code's memory usage (a very small simulation with no multiphysics), the base usage came out to about 1.6 GB per node (or 200 MB per processor, versus the ~17 MB astrobear occupies on disk), which should account for ~12 GB over the nodes. Not sure where the additional memory is going.
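The baseline bookkeeping above can be checked directly (all numbers are from the text; the 8 processes per node follows from 64 cores spread over 8 nodes):

```python
mb_per_proc    = 200         # measured baseline per MPI process
total_procs    = 64          # cores used in the run
nodes          = 8
procs_per_node = total_procs // nodes   # 8 processes per node

gb_per_node = mb_per_proc * procs_per_node / 1024   # ≈ 1.56 GB per node
gb_total    = gb_per_node * nodes                   # ≈ 12.5 GB across all nodes

print(round(gb_per_node, 2), round(gb_total, 1))  # → 1.56 12.5
```

So the "~1.6 GB per node" and "~12 GB over the nodes" figures are self-consistent; the unexplained part is only the ~30 GB gap between this baseline and what the nodes actually report.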

Paper submitted to ApJ and posted to astro-ph

Still playing with optimizing refinement

Can do geometric refinement, but when it is used together with a refinement object, I get wonky behavior. Need to look into this a bit more.

Some documentation updates to the wiki

Updated standard out page

The job sizes page still needs a bit more work. Question: how many aux variables are there in 2.5D, and in 2.5D with angular momentum?

Also want to update the debugging page to include stack memory issues: what the stack is, the quiet seg fault you see when this limit is hit, and how Fortran can run out of stack quickly (e.g. large automatic arrays). Is the default stack size 10 MB on most clusters? And is that limit per node, or per process?
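For what it's worth, on many Linux systems the default soft stack limit is 8 MB (`ulimit -s` reporting 8192 KB), and it applies per process, so each MPI rank gets its own stack limit; but clusters do vary, so it is worth checking on the machine itself. A minimal way to query it from Python:

```python
import resource

# Query the current soft/hard stack-size limits for this process.
# The soft limit is what a process can actually use; the hard limit is
# the ceiling an unprivileged user may raise the soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def fmt(limit):
    if limit == resource.RLIM_INFINITY:
        return "unlimited"
    return f"{limit // (1024 * 1024)} MB"

print("soft:", fmt(soft), "| hard:", fmt(hard))
```

The same information is available from the shell with `ulimit -s` (soft) and `ulimit -Hs` (hard); note that a batch job inherits the limits in effect where it was launched, which can differ from the login node.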

Comments

No comments.