Making an animation in Shape by example
FYI: I made my own Library (copying Erica). I wrote an outline about the 2007 Shape paper here, in case anyone is interested in the point of Shape.
Since our group is trying to use the Shape software for our simulations, I figured I would start documenting helpful components of visualizing and modeling in Shape. Here I will briefly explain how one can make a movie, or animation, in Shape. Toward the end of some days, I am going to work through the templates on their website. With Shape, however, I have two main objectives:
- Learn how to convert our hdf5 simulation data to ASCII so that it can be fed into Shape (see the sketch after this list).
- Learn how to visualize this external data in Shape. What limits are there, and what can Shape tell us about our data compared to other visualization software?
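As a first pass at the first objective, here is a minimal sketch of the conversion in Python. The dataset name `density` and the plain `x y z value` column layout are assumptions that would need to be checked against our actual HDF5 output and against Shape's import format:

```python
import h5py
import numpy as np

# Hypothetical layout: adjust the dataset name to match our actual output.
with h5py.File("simulation0030.hdf5", "r") as f:
    rho = f["density"][...]          # assumed 3D dataset of cell densities

# build cell-index coordinates for every cell
nx, ny, nz = rho.shape
x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                      indexing="ij")

# one "x y z value" row per cell - a plain ASCII table Shape can read in
table = np.column_stack([x.ravel(), y.ravel(), z.ravel(), rho.ravel()])
np.savetxt("simulation0030.txt", table, fmt="%d %d %d %.6e")
```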
So if a template I check out is useful, I'll make a blog post explaining its objective. Eventually I'll use these posts to make a series of wiki pages documenting Shape for our purposes. However, I have yet to become a "Shape Master"; otherwise I would have done objectives 1 & 2 already. This template is quite simple: how do you make a movie in Shape?
Results:
(Make sure you check out the .gifs! They are really pretty.)
Image 1 (BW GIF, Red/Blue GIF) | Image 2 (GIF)
Image 3 (GIF) | Image 4 (GIF)
This data is from one of the Shape templates, titled Animation 1 - Rotation. It is a bipolar nebula rotating around all three x-y-z Cartesian axes. Image 1 is a 2D projection, comparable to our column density map movies (though here the object is not evolving in time). Image 2 is the line profile of velocity vs. pixel intensity as the object rotates. Image 3 is the mesh used for the object. Image 4 is its PV diagram.
Guide:
This screen-shot was taken post-rendering. Click the big button with the Shape-S in the top left-hand corner to view the data as seen here. Prior to rendering, the PV-diagrams will be black. Note that all of the data parameters are already in the template. You can adjust the colors if you want to. For instance, I have also made a movie color-coded to indicate Doppler shifting, i.e. red/blue (see above). Other options include grey scale, rainbow (color), red/blue, gradient, and spectrum. Click the movie-film-looking button in the same row as the renderer to get to the animation module.
Image 6 is what the animation module looks like. Instead of clicking the rendering button, you'll click the animation button (which is circled and marked as the second step). First you want to adjust what format your output will be. Note the timeline at the bottom of the GUI: as Shape makes your animation, it will tell you how far along it is in the process. On the right-hand side of the animation module is the variable tree. Below the variable tree, in the table, is what is referred to as the variable stack. Essentially, the functionality of this part of the animation module in v. 5 seems not much different from v. 4.
Here is a table of screen shots of all the parameters listed in the animation module.
Image 7. General | Image 8. Variable | Image 9. Output
Image 7 (in order from top to bottom):
- Name of your file output
- Number of frames (where you want them to start and end in the simulation)
- Current frame tells you which frame Shape is currently spitting out as a .png (there is also a timeline at the bottom of the GUI that runs from the first frame to the last frame) relative to the simulation
- Start and end times of your simulation for some given time units (years, days, hours, minutes, seconds).
- Like current frame, current time tells you the simulation time currently being animated, in your chosen units.
- Distribute & Fields: Using particles for rendering or output velocity vector information may require Shape to redistribute the particles in every frame. This is done by default. If you are not using particles, then disabling the distribution can save processing time.
- Render: Render while you animate. This is essentially required.
Image 8. (Currently investigating)
Image 9. Above in the results section of this post, you can see visualized all of the options that are listed here: 2D Maps, PV Diagrams, 3D Mesh, Hydro, Plots (Images) (note: the numbers to the right indicate the LxW pixel size of the image, so these plots are 512x512), Plots (Ascii), Math Variables, and Time Units. You can choose the image type and indicate where you want the images to be saved (a working directory).
Hypre tolerances
So I ran hypre with varying tolerances for the radiation solve and found that the solution only converged when I set the tolerance to 1e-10.
[Movie]
So I looked more closely at what the 'tolerance' actually refers to in hypre. The hypre user guide is not much help…
Convergence can be controlled by the number of iterations, as well as various tolerances such as relative residual, preconditioned residual, etc. Like all parameters, reasonable defaults are used. Users are free to change these, though care must be taken. For example, if an iterative method is used as a preconditioner for a Krylov method, a constant number of iterations is usually required.
I'm assuming that hypre is trying to solve Ax=b and instead ends up solving Ax'=b', where x' is the approximate solution for x and r stands for the residual (r = b - b'). I'm not sure what the _C means or what the <C*b,b> represents… presumably this is the inner product of some matrix 'C' times b with b.
```
Iters       ||r||_C    conv.rate   ||r||_C/||b||_C
-----  ------------   ----------   ---------------
    1  1.667442e-07   265.427358      3.581554e-08
    2  2.421703e-07     1.452347      5.201657e-08
    3  2.037404e-07     0.841310      4.376208e-08
    4  1.176484e-07     0.577442      2.527008e-08
    5  7.646604e-08     0.649954      1.642440e-08
    6  4.446094e-08     0.581447      9.549914e-09
    7  2.173844e-08     0.488933      4.669272e-09
    8  1.033716e-08     0.475525      2.220354e-09
    9  5.190075e-09     0.502079      1.114794e-09
   10  2.514604e-09     0.484502      5.401202e-10
   11  1.291694e-09     0.513677      2.774473e-10
   12  6.662719e-10     0.515812      1.431108e-10
   13  4.688375e-10     0.703673      1.007032e-10
   14  2.451621e-10     0.522915      5.265918e-11
   15  1.654131e-10     0.674709      3.552962e-11
   16  8.812279e-11     0.532744      1.892819e-11
   17  5.478943e-11     0.621740      1.176841e-11
   18  7.612170e-11     1.389350      1.635043e-11
   19  1.416387e-10     1.860687      3.042304e-11
   20  2.346052e-10     1.656364      5.039163e-11
   21  4.389153e-10     1.870868      9.427609e-11
   22  1.170443e-09     2.666672      2.514034e-10
   23  3.069051e-09     2.622127      6.592117e-10
   24  3.268667e-09     1.065042      7.020879e-10
   25  3.349935e-09     1.024863      7.195436e-10
   26  1.136404e-09     0.339232      2.440919e-10
   27  2.175246e-10     0.191415      4.672284e-11
   28  7.067671e-11     0.324914      1.518089e-11
   29  2.139085e-11     0.302658      4.594611e-12
   30  7.493659e-12     0.350321      1.609588e-12
   31  3.051552e-12     0.407218      6.554531e-13

<C*b,b>: 2.167928e+01
```
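For what it's worth, I believe the tolerance is compared against the quantity in the last column. Here is the unpreconditioned version of that quantity as a minimal numpy sketch (the `_C` norms above would then be the preconditioner-weighted analogue):

```python
import numpy as np

def relative_residual(A, x_approx, b):
    """||b - A x'|| / ||b||: the usual (unpreconditioned) convergence measure."""
    r = b - A @ x_approx
    return np.linalg.norm(r) / np.linalg.norm(b)

# toy system for illustration
A = np.diag(np.arange(1.0, 6.0))
x_true = np.ones(5)
b = A @ x_true
x_approx = x_true + 1e-8              # a slightly wrong "solution"
print(relative_residual(A, x_approx, b))   # ~1e-8, so tol=1e-7 would accept it
```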
So I looked for an explanation of convergence for linear-system solves and found this document.
The condition number of a matrix is related to how much a relative error in the rhs affects the relative error in the lhs, i.e. perturbing the rhs of $Ax = b$ gives

$$A(x + \delta x) = b + \delta b$$

which would imply that

$$\frac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\delta b\|}{\|b\|}$$

In general we have

$$\kappa(A) = \|A\|\,\|A^{-1}\|$$

and $\kappa(A)$ indicates how close to singular a matrix is (i.e. a singular matrix would have an infinite condition number). Also, because of machine precision, there is an unavoidable error related to the condition number:

$$\frac{\|\delta x\|}{\|x\|} \sim \kappa(A)\,\epsilon_{\mathrm{machine}}$$
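A tiny numerical illustration of the bound (a hypothetical 2x2 system, not our radiation matrix):

```python
import numpy as np

A = np.array([[1.00, 0.99],
              [0.99, 0.98]])          # nearly singular matrix
kappa = np.linalg.cond(A)             # ||A|| * ||A^-1||

b = np.array([1.0, 1.0])
x = np.linalg.solve(A, b)

db = 1e-8 * np.array([1.0, -1.0])     # tiny perturbation of the rhs
dx = np.linalg.solve(A, b + db) - x

print(f"kappa(A)      = {kappa:.2e}")
print(f"rel rhs error = {np.linalg.norm(db) / np.linalg.norm(b):.2e}")
print(f"rel lhs error = {np.linalg.norm(dx) / np.linalg.norm(x):.2e}")
# the lhs error can be amplified by up to a factor of kappa(A)
```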
AstroBEAR solvers
Now we have two solvers available in the code… One is GMRES and the other is PCG (preconditioned conjugate gradient). GMRES does not converge unless the tolerance is set to ~1e-3 - but then it does nothing. PCG apparently expects symmetric positive definite matrices - but our matrices are certainly not symmetric. Not sure how much that matters (a quick check is sketched below)… I'm also not sure whether PCG is actually doing any preconditioning automatically, since I am not setting the preconditioner etc…
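A quick sanity check one could run on a small dense dump of the matrix (a sketch; `check_spd` is a hypothetical helper, and PCG formally wants both properties):

```python
import numpy as np

def check_spd(A, tol=1e-12):
    """Return (symmetric?, positive_definite?) for a dense matrix A."""
    symmetric = np.allclose(A, A.T, atol=tol)
    try:
        # Cholesky succeeds iff the symmetrized matrix is positive definite
        np.linalg.cholesky(0.5 * (A + A.T))
        pos_def = True
    except np.linalg.LinAlgError:
        pos_def = False
    return symmetric, pos_def

print(check_spd(np.array([[2.0, 1.0],
                          [0.5, 2.0]])))   # (False, True)
```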
Also, the actual solution vector 'x' is in terms of the radiation energy, not the radiation temperature - and since E ~ T^4, a 3-order-of-magnitude range in temperature becomes a 12-order range in energy - so the relative error in the vector norm will be dominated by the larger values… A better metric would be the maximum local relative error…
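A sketch of why the norm-based measure is misleading here, and of the proposed per-cell metric (the temperature values are made up for illustration):

```python
import numpy as np

def max_local_relative_error(x_approx, x_ref):
    """Largest per-cell relative error: insensitive to the huge spread in E."""
    return np.max(np.abs(x_approx - x_ref) / np.abs(x_ref))

T = np.array([1e1, 1e2, 1e3, 1e4])    # 3 orders of magnitude in temperature
E = T**4                              # becomes 12 orders of magnitude in energy
E_bad = E.copy()
E_bad[0] *= 2.0                       # 100% error in the coldest cell

print(np.linalg.norm(E_bad - E) / np.linalg.norm(E))  # ~1e-12: looks converged
print(max_local_relative_error(E_bad, E))             # 1.0: clearly not
```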
Meeting update -- Erica
Colliding Flows:
Rebuilding the hypre library with the no-global-partition option worked in fixing the memory-leak issue
Sims are progressing along nicely; see page for update. Standard out predicts another 8 days at the current frame rate
BE-paper:
Referee report is back, with good comments. Will be working through them and resubmitting this week.
Questions:
- Keto reference?
In-group Education:
I made a page on scaling for Bluestreak
Documentation:
Mini-group report
Outreach:
Horizon's program over the summer — teach underprivileged kids about computational astro. Possible to have them using astrobear???
Mini-group page
Mesh/patches on different numbers of processors
Images of the mesh with patches overlaid for comparison between bluehive (64 procs) and bluestreak (512 procs).
If each patch only contained cells of a given AMR level, then it is easy to see why more procs → more patches → a smaller region of highest-level refinement (as in the bluestreak case). That is because once a physical region in the grid is marked for refinement (say to the highest level, 5), each patch that overlaps that flagged region is forced to contain level-5 cells. The more processors used, the smaller the patches, and so the more tightly that level-5 region is 'resolved' by the patch decomposition.
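Here is a toy illustration of that reasoning (a hypothetical uniform 1D decomposition, not AstroBEAR's actual load balancer): as the processor count grows, the union of patches forced to hold level-5 cells shrinks toward the truly flagged region.

```python
import numpy as np

domain = 1024                         # cells on the parent level
flagged = np.zeros(domain, dtype=bool)
flagged[500:524] = True               # region marked for level-5 refinement

for nprocs in (64, 512):
    edges = np.linspace(0, domain, nprocs + 1).astype(int)  # one patch per proc
    # any patch overlapping a flagged cell must contain level-5 cells
    hot = [(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])
           if flagged[lo:hi].any()]
    covered = sum(hi - lo for lo, hi in hot)
    print(f"{nprocs:4d} procs: {len(hot):2d} level-5 patches, "
          f"{covered} cells covered (24 truly flagged)")
```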
The legend on these plots shows that patches can contain different levels, but it is not clear from these plots whether these subsets of the patches (the different levels) must be contiguous.
Would be nicer to see this image colored only by patch number, rather than by level + patch.
Intuitively, it seems the physical regions flagged for refinement to level 5 should have a one-to-one correspondence between machines. If that isn't the case, as these images suggest, is the difference attributable to compiler differences or to the different number of processors?
Bluehive:
Bluestreak:
Meeting update - Erica, 2/10/14
Group related stuff:
I make a motion to move the group meeting to another day, one less busy with meetings. If this motion passes, I volunteer to help organize another time that works for everyone. The reason I am proposing this switch is that Monday is quite stressful/hectic with all of the meetings: 11:30-12 grad students meet with the speaker, 12-1 Pizza Lunch, 1:30-3 group meeting, 3:30-5 lecture. I personally find it suboptimal to prepare for our Monday meeting under these circumstances. On another personal note, I think a Friday meeting would be best - wrapping up the week with updates, and there are no meetings/classes on that day.
Science Stuff:
Colliding Flows
Am attempting to optimize the wall-time of the runs on the different local clusters. Currently, the 2 runs I have made decent progress on are the 15- and 30-degree shear angle cases. These have been restarted and moved around from bluehive to bluestreak to find a sweet spot. Here are some findings.
In 1 day on bluestreak (512 processors), the restart produced 7 new frames.
The same number of frames was produced on bluehive (64 cores, standard queue), but over 3 days - i.e., bluestreak delivered ~3x the throughput with 8x the cores.
The restart frame is 32.
Comparing frame 33 on bluehive and bluestreak:
Mesh on Bluehive:
Mesh on Bluestreak:
Rho on BH:
Rho on BS:
Computing
Worked on getting baseline estimate of astrobear -
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
Found out that only 96 cores are available per user at any given time on bluehive, and that only 24 hrs are allowed per job on bluestreak (the max is 48 hrs with a reservation). I asked CIRC about bluestreak and got this email, which I thought was relevant to share with the group:
Erica,
The queue time limits on BlueStreak are to ensure that everyone gets a share of the machine and a little bit to force people to check point their code. Month long run times run the risk of something happening before the job finishes. If the job is not check pointed and restartable, all the run time is lost if the job and or the node crashes. Also, if the job is in an infinite loop the whole month's worth of run time would be lost. So the time limits are short to ensure higher turn over of jobs and good programming practices.
The scheduler is configured to give all users a fair share of time on the BG/Q. So people that have only occasional jobs don't get shut out by people that continually shove lots of jobs into the queue. The share is based on the number of nodes multiplied by the run time of jobs over the past 2 weeks. The fair share and wait time in the queue are the primary factors in setting priorities in the scheduler.
Your jobs have been an exception that I am still trying to figure out. They should be starting in less than a day but they have been taking days to get started. We recently updated to a new version of the scheduler so I would be interested to see if this fixes the problems with the 256 node jobs.
All of this can be overriden with a reservation but there are two problems with that. One is that the smallest reservation we can make is half the machine. So your code would need to scale to 512 nodes or maybe run as multiple jobs in parallel to make use of all the nodes.
The bigger problem is how to justify dedicating half the machine to a single person for a month or two. This is a shared resource for the UR and URMC research communities and we try to be fair in allocating time to all the users.
I hope this explains some of our reasoning behind our policies with regard to the BlueStreak. Feel free to contact us if you still have questions.
Carl
Documentation Stuff:
When do you all want to get together and talk about documentation?
Meeting update - Erica
- Working on understanding memory usage on the big machines, and putting together a proposal for job resources to share with Jonathan (W?)
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
- Got added to Kraken account, will consider benefits of moving sims there
- Found some interesting papers last week, http://arxiv.org/abs/1401.7648, http://iopscience.iop.org/0004-637X/779/2/121/, http://arxiv.org/abs/1401.6096
Specifically, for the last paper:
- High-resolution sims (~25-45 AU dx)
- Radiation transport, MHD, self-gravity, outflow feedback, self-consistent turbulent initial conditions
- Sub-grid model (Cunningham 2011) for outflow feedback: assigns some fraction of the flow to be ejected from the sink along the sink's angular momentum axis, at some fraction of the Keplerian speed. This uses the mass of the sink and a radius for the sink - though I'm not certain what radius they take for the sink (see the sketch below).
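For reference, the launch speed in this kind of sub-grid model is presumably something like the following (a minimal sketch in cgs; the Keplerian fraction `f_kep` and the sink radius used here are illustrative placeholders, not the paper's actual values):

```python
import numpy as np

G = 6.674e-8                            # gravitational constant, cgs

def outflow_speed(m_sink, r_sink, f_kep=0.3):
    """Launch speed as some fraction of the Keplerian speed at the sink radius."""
    return f_kep * np.sqrt(G * m_sink / r_sink)

msun, au = 1.989e33, 1.496e13
# e.g. a 0.5 Msun sink with an assumed 25 AU radius (~one cell at their dx)
print(outflow_speed(0.5 * msun, 25 * au) / 1e5, "km/s")
```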
Meeting update - Erica
Colliding Flows
Adjusted Christina's set-up (15-degree shear angle) slightly and sent it to bluehive, where I was able to run it on 64 cores to about 30/200 frames. The amount of memory for the simulation is growing with the filling fractions, and the time it will take to complete there is > 1 month. I asked her to check the output and verify it looks good on Friday; will ping her again today. I am ultimately going to have to move this to a bigger machine to cut the completion time significantly, but am currently running into some issues compiling on bluestreak; will sit down with Baowei to work on it.
Also, as I have seen in the past, the memory used by info arrays in the simulation differs from the actual memory usage on the nodes. I am again seeing a difference of ~30 GB over the 8 nodes I am using. When I ran a baseline test of the code's memory usage (a very small sim with no multiphysics), the base usage was about 1.6 GB per node (or ~200 MB per processor, far more than the ~17 MB astrobear occupies on disk), which should account for ~12 GB over the nodes. Not sure about the additional memory being used.
Paper submitted to ApJ and astro-ph
Still playing with optimizing refinement
Can do geometric refinement, but when use with refinement object, get wonky behavior. Need to look into this a bit more.
Some documentation updates to the wiki
Updated standard out page
The job sizes page needs a bit more work still. Question - how many aux variables are there in 2.5D, and in 2.5D with angular momentum?
Also want to update the debugging page to include stack memory issues (a definition of the stack, the quiet seg fault you see when this type of limit is hit, and how Fortran can run out of stack fast). Is the default stack 10 MB on most clusters? And is this per node? (See the sketch below.)
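For that page, here is a sketch of checking and raising the limit from within Python (the shell equivalent is `ulimit -s`). Note that rlimits like this are inherited per process - so each MPI rank sees it - rather than being a per-node total:

```python
import resource

def show_stack_limit(label):
    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    fmt = lambda v: ("unlimited" if v == resource.RLIM_INFINITY
                     else f"{v / 2**20:.0f} MB")
    print(f"{label}: soft={fmt(soft)}, hard={fmt(hard)}")

show_stack_limit("before")
# raise the soft limit as far as the hard limit allows
# (same effect as `ulimit -s unlimited` in a job script, when permitted)
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
show_stack_limit("after")
```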
Meeting update
- Wiki page updates - updated the standard out page and the job sizes page, although the job sizes page needs a bit more work. Question - how many aux variables are there in 2.5D, and in 2.5D with angular momentum?
- Also want to update the debugging page to include stack memory issues (a definition of the stack, the quiet seg fault you see with this type of limit, and how Fortran can run out of stack fast). Is the default stack 10 MB on most clusters, and is this per node? Want to also write an explanation of the size of astrobear alone, and how you can do a baseline test to see what you are actually running. Also want to document that the size of info allocations in standard out only includes the actual info structure, not other large arrays such as post-processing might use. Also want to update the ticket for the CF problem.
- I read a ton this past week and need to update my library with papers and such
- I am working on making a wiki page for the CFs meeting tomorrow
- I emailed Christina about using my code; no word back from her yet, so I emailed her again today to check in
- Should I register for hotel accommodations for the Santa Barbara conference?
- Simulation died with a seg-fault when running 5 levels, but not 4 levels. The thermal instability (TI) seems to be resolution dependent…?
- I am seeing more memory usage than expected in some simulations; going to make a note of it on a ticket and come back to it later
- A good reference for MHD numerical methods?
Sim updates and memory findings in bluehive.log
My 64³ + 5 levels run was going okay until I got some NaNs; then the memory exceeded what was requested, and bluehive.log gave me an odd error log (check attached). run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript/5levels_nans I can restart from the frame before the NaNs, and possibly use a debug version of the code to try for helpful messages in debugging this?
I am also re-running with 4 levels to see how that run does. run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript/4levels
At least with 2 levels (completed in ~10 hrs), I do not think I saw these NaNs arising. run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript and files:/grassdata/erica/from_bluehive/CollidingFlows/3DMHDUnlStack
Also found that the memory used as reported in bluehive.log is ~1-2 GB higher than what standard out reports for some low-res test jobs, and that this memory report differs depending on whether I run on 1 or 2 nodes. Perhaps this mismatch between standard out and bluehive.log is due to some overhead in the inter-node, inter-core communication? run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestBluehivelog
For the high-res job (64³ + 5), the memory used was reported as 80 GB > the 40 GB total I requested, and also > the 50 GB standard out reported I used. This suggests an overhead of about 30 GB…? That is, maybe the simulation was using 50 GB, which already exceeded the memory request, but with the overhead it was actually using 80 GB… and so was finally killed by the queuing system.
This makes it slightly unclear how much memory we should be requesting on the cluster when the memory standard out reports /= the memory used by the cluster.
Meeting update
- Making the stack unlimited fixed the seg faults for the CF runs!
- Have now completed a 3D, 64³ + 2 levels run
- Am now running a 64³ + 5 levels run on the AFRANK queue
- Completed latest draft for BE paper
Documentation Meeting - 01/25/2013
Last Time
- Jonathan's post from last time: johannjc01092013
User Guide
- Erica's new user's guide: UserGuide
Teams and Leaders
- new group organization: Teams
Blog Categories
- Cleaned up some of the redundant/unnecessary blog categories. Using these will be nice for team meetings. Is there a way to have a drop-down menu of the most commonly used categories (say the top 5) when creating a new post?
Discussion Plugin
- The discussion plugin is not working at the moment, but I had also cleaned that up, and there is a newer forum (created by Erica) for discussing wiki documentation.
To-Do List
Task | Member |
---|---|
Update objects pages | Eddie |
Update specifics within User Guide | Eddie, Erica |
Update multi-physics pages | Shule, Eddie |
Debug Team Page | Baowei |
Education Team Page | Erica |
Development Team Page | Jonathan |
Organize Developer Guide | Jonathan |
A few wiki documentation things to sort out
- contact us link on Download Page does not work if not logged in
- Instructions for installing AstroBEAR discuss using mercurial but don't mention where to check out the official 'repo' – which would of course only work for local folks anyways…
- AstroBearStandardOut needs to be updated to describe elliptic time usage. It would be helpful to also show some common errors that are seen.
- ModulesOnAstroBear
  - does not discuss the ProblemBeforeGlobalStep subroutine
  - The simulation data section should not encourage the use of numerical indices for things like px or Energy, as these depend on the equation of state, lMHD, nTracers, etc…
  - The dimensions section should be updated, since 1D problems are possible
  - The dimensions section should also encourage the use of the CellPos function to get x,y,z from i,j,k
  - The units and scaling section should mention the scales.data file that is dumped by the code
  - Also worth mentioning that the value of info%q should technically be the average of the physical solution over the cell… Motivation for sub-cycling/smoothing etc…
  - Also, perhaps setting up a clump would be a good example, since you could then show how to rewrite the problem module using the ambient and clumps objects
  - The units and scaling section should not encourage the changing of physical scales
  - Initializing a Grid says that each infodef has a %dx, which is not true
  - Initializing a Grid has an incorrect expression for the number of ghost zones
  - Flagging cells for refinement implies that the errflag array extends into ghost zones – which it doesn't
  - AdditionalPhysics should probably be a separate page
  - AdditionalPhysics says that MaintainAuxArrays needs to be set to true, which is no longer correct
  - AdditionalPhysics should discuss equation of state options as well
  - Aux fields only need to be initialized if lMaintainAuxArrays is true (in 1D MHD it is false)
  - Also, an example that shows how to calculate the location of aux field entries would be good – as well as an example showing how to initialize a divergence-free aux field using a vector potential
  - It would be nice to stop storing the Cooling Objects in the chombo file – which would simplify the problem module
- Not sure if self-gravity still has a problem with a uniform density field… this should be checked – it would only be a problem with periodic BCs for the potential – also needed is a discussion (or a link to a discussion) about what needs to be set in global.data for self-gravity to work.
- I believe you can have sink particles without self-gravity (ie just point gravity)
- This should also be updated with viscosity/resistivity/thermal conduction stuff…
- AstroBearObjects
- Disks need to be added to Initial conditions documentation
- VectorPerturbations needs to be added to the Sub-Objects and documented
- All of the individual object pages need to be (created) or updated to reflect changes made by Ivan and others
- ScramblerIntro
- Need to decide what to do about refinement objects etc…
- How to setup a properly posed problem should probably be linked from somewhere else…
- DataFileTutorial
- io.data, communication.data, modules.data, and process.data are deprecated
- Data File Tutorials should be moved inside of ScramblerIntro, and 'Getting Started with AstroBEAR' should be renamed to 'AstroBEAR user's guide' or something… and linked below the Download link on the main wiki page
- Then Tutorials will have information on things besides AstroBEAR - though the visit tutorial could be linked from the visualization page
- The link to Code on the main wiki page is essentially a link to a developers guide
- Meeting pages are deprecated
- Collaborators & Projects, Publications, and Image Gallery probably don't belong under 'NewUsers'
Updates to Building Astrobear tutorial
I've made some updates to the Building Astrobear tutorial to reflect the system changes brought on by the Great Crash:
- Bluehive and Bluegene now sport reminders to set up a VPN connection, as well as links to the University IT page that explains how to do so.
- Grass's build instructions have been updated with the new module list, as well as how to enable modules in your `bash` environment.
- A new section has been added for `alfalfa` build instructions. This section looks a lot like grass's, except that it also has a VPN reminder.
Meeting Update for week of July 10, 2011
Alfalfa, the new grass-style machine, is now <palpatine_voice>fully operational</palpatine_voice>. It can be reached via ssh:
ssh <username>@alfalfa.pas.rochester.edu
Users still need a VPN to connect to alfalfa from off-campus, but no VPN is needed to connect from another machine in the PAS domain.
All of the modules accessible on `grass` and `clover` are accessible on `alfalfa`, as is `ifort`. AstroBEAR's `Makefile.inc.grass` seems to work for `alfalfa` as well.
It turns out we lost email notifications when the wiki was brought back online, due to changes in the department's SMTP protocol configurations. This problem has since been fixed, and users are receiving emails about tickets again. We are still working to restore notifications about repository checkins.
I've created a page about scaling in AstroBEAR; it's still pretty rudimentary, but it's at least RFC-ready at this point. This is part of my effort to consolidate some of the tags in our wiki and eliminate the 'attention-needed' category. Most of the other 'attention-needed' pages were based on AstroBEAR 1.0 specs, so I moved them over to 'deprecated'.
Finally, an open question for the group: what should go on the visualization page? Most of the visualization tutorials are on the [Tutorials tutorials] page; should they be linked on the visualization page as well? Is there something else entirely that people would like to see there?
Meeting Update for week of July 3, 2011
You've probably already noticed it, but the wiki's front page has been reorganized a bit.
Also, references to /var/trac and /var/repositories have been replaced with /data/trac and /data/repositories, respectively. The two exceptions are:
- Documentation pertaining to AstroBEAR 1.0.
- Tickets created prior to the Great Crash of '11.
I'm not sure whether it's worth altering AstroBEAR 1.0 documentation, and I thought I'd get the group's opinion before I start tampering with comments on old tickets.
In other news, the data file tutorials page is now completely filled out. Tutorials now exist for all the major `.data` files and most of the minor ones (`problem.data` being the exception).
And in other other news, `alfalfa`, our new `grass`-style machine, is online. It doesn't have an Intel compiler installed yet, but it's another 2 TB of storage for people who need it. It is currently only accessible from on campus, unless a VPN is used. A subsequent blog post will have more details.