Posts for the month of January 2012

Binary wind capture and accretion disk formation 2012

There was a meeting on Fri 27 Jan '12, where we thought about the relevant parameters for the binary simulation again.

Characteristic parameters:

  • a, the separation of the stars, which we want to keep at 20 AU.
  • AGB gravity (?)
  • AGB wind temperature, Tw
  • AGB wind speed, vw, should be ~20 km/s
  • q = Mprimary/Msecondary
  • gamma, which for AGB winds should be close to 1
  • sigmas = gravity softening radius of the secondary / grid resolution = rsoft/dx
  • sigmaB = gravity softening radius of the secondary / Bondi accretion radius = rsoft/rB, where rB = 2GMsecondary/(vw² + cs² + vsecondary²), cs is the AGB wind's sound speed and vsecondary is the orbital velocity of the secondary with respect to the center of mass, located at the origin (see the sketch below).
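
As a sanity check on the rB definition, here is a quick sketch of how it evaluates; only vw = 20 km/s and Tw = 1000 K come from this page, and the remaining numbers are hypothetical:

    # Hedged example of rB = 2*G*M2 / (vw^2 + cs^2 + v2^2) in cgs.
    # M2 and v2 are assumptions, not values from the actual runs.
    import numpy as np
    G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13
    kB, mH = 1.381e-16, 1.673e-24
    M2 = 1.0 * Msun                 # hypothetical secondary mass
    vw = 20e5                       # wind speed, 20 km/s
    cs = np.sqrt(kB * 1000.0 / mH)  # isothermal sound speed at Tw = 1000 K (mu = 1 assumed)
    v2 = 10e5                       # hypothetical orbital speed of the secondary
    rB = 2 * G * M2 / (vw**2 + cs**2 + v2**2)
    print(f"rB = {rB / AU:.1f} AU")  # ~12 AU for these numbers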

Test 1: Binaries

  • Isothermal solver. It's the first time I've used it for this problem
  • q = 1.5, as before
  • a = 20 AU
  • Tw = 1000 K
  • sigmas = 4 cells
  • sigmaB = 10, i.e. 40 cells per rB. This is accomplished with a domain of 8³ computational units and a grid of 32³ + 5 refinement levels around the particle.
  • vw = 20 km/s
  • tfinal = 5 orbits.

Progress:

29 Jan, 14:38. Test 1 started running on the morning of Sat the 28th on bluehive. I've seen some high CFL number reports after the secondary's gravity is turned on, which significantly decrease the timestep and slow the run. I've been (i) restarting with smaller CFL numbers, and (ii) trying to progressively increase the secondary's gravity. The secondary has completed about 0.5 orbits so far. At this point, the gas that has been captured by the secondary star does not look like a disk: http://www.pas.rochester.edu/~martinhe/29jan1457.png

I'm monitoring the progress and have 2 instances of the problem running.

30 Jan 8:15. Test 1 is still running (see image), t = 0.8 orbit. I've been finding high CFL reports which sometimes freeze the code and sometimes significantly reduce the timestep. I'm looking into the part of the code dealing with this, but in the meantime I've been running with CFL numbers of 0.1-0.3, so the simulation is going slowly. Another instance of the problem will start running on bluegene today (at some point).

The upper right panel shows a zoom of the left panel. The bottom right panel shows a zoom with velocity vectors. The flow does not look Keplerian at this point; it seems too early to judge whether this is correct or not. http://www.pas.rochester.edu/~martinhe/30jan815.png
31 Jan '12, 9:17am. InterpOrder=2. The flow does not look like a disk, but we're far from 5 orbits. Compare the image to the right with the one of 29 Jan (two above) to see the differences with the interpOrder=3 case. It's going very slowly for the reasons reported in ticket 167 (https://clover.pas.rochester.edu/trac/astrobear/ticket/167), which I'm working on. In the meantime I've been running (on bluehive, with another instance of the sim waiting in bluegene's queue) with small CFL numbers, ~0.09-0.1. http://www.pas.rochester.edu/~martinhe/2011/binary/31jan12.1105.png
1 Feb '12, 7:55am. InterpOrder=2 continues, time = 0.8 orbit. The flow about the secondary looks more uniform than before (see the zoomed-in image): http://www.pas.rochester.edu/~martinhe/2011/binary/1feb12-755.png

7 Feb. Running well on bluehive, with 64 afrank procs. I've removed one AMR level so we can get data ASAP. This run includes some fixes and solver parameters that we've been discussing.

If test 1 fails (i.e. it produces a tilted or an amorphous disk), then in test 1.1 I will increase sigmaB for a fixed sigmas. If test 1.1 fails (ditto), then in test 1.2 I will increase sigmas and keep everything else fixed.

Test 2: Disk at the centre of the grid

  • Isothermal solver. It's the first time I'll use it for this problem.
  • Disk-to-ambient density contrast of 10⁶
  • sigmas = 4 cells
  • extrapolated BCs.

Progress.

30 Jan 8:22. Test 2 has completed 4 orbits (see images). The central part of the disk shows an inclination of ~30° with respect to the orbital plane, from t = 1 orbit on. I do not yet understand why this happens. The outer parts of the disk remain fairly axisymmetric, though, and these conditions do not vary much in time. The grid is 32³ + 3 AMR levels, with a disk radius of 1 computational unit.

~1 orbit: http://www.pas.rochester.edu/~martinhe/30jan840a.png
~2 orbits: http://www.pas.rochester.edu/~martinhe/30jan840b.png
~3 orbits: http://www.pas.rochester.edu/~martinhe/30jan840c.png
~4 orbits: http://www.pas.rochester.edu/~martinhe/30jan840d.png

Below is the early velocity field evolution, superimposed on the logarithmic gray scale of the density. Zoom in:

edge-on view | pole-on view
http://www.pas.rochester.edu/~martinhe/30jan1050a.png
http://www.pas.rochester.edu/~martinhe/30jan1050b.png
http://www.pas.rochester.edu/~martinhe/30jan1050c.png
http://www.pas.rochester.edu/~martinhe/30jan1050d.png

The grid seems small for the problem. http://www.pas.rochester.edu/~martinhe/30jan1103.png

31 Jan '12, 9:17am. Here's a movie of test 2 (AMR + interpOrder=3) showing the disk-plane velocity field superimposed on the log(density). cs=1. Arrows have a fixed length and are color coded.

http://www.pas.rochester.edu/~martinhe/2011/binary/31jan12a.gif

I see a Keplerian-like velocity distribution at t=0. Then there's a fast radial expansion which, I think, results from the initial pressure gradient between the disk and the ambient. Would this, or a similar process, happen for initial conditions with a disk-to-ambient density contrast of 1 and a super-Keplerian disk velocity distribution (in that case there should be no poloidal expansion, though)? After that, the flow still shows a velocity that scales down with r, and the density seems to show some spiral-like pattern. The right panel (zoom) shows gas located well within the Bondi radius. The initial 'red' central velocities seem to quickly disappear, and some of the green ones do too - the central disk seems to decelerate? After that velocity change, the distribution of the central disk seems to change only modestly.
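
As an aside, here is a sketch of one way to quantify "Keplerian-like" (assumed disk-plane arrays centered on the central mass; this is not the analysis actually used here):

    # Sketch of a Keplerian-ness check: v_phi / v_K, where v_K(r) = sqrt(G*M/r).
    # x, y, vx, vy are assumed disk-plane arrays centered on the central object.
    import numpy as np

    def keplerian_ratio(x, y, vx, vy, GM):
        r = np.sqrt(x**2 + y**2)
        vphi = (x * vy - y * vx) / r      # azimuthal component of the velocity
        return vphi / np.sqrt(GM / r)     # ~1 where the flow is Keplerian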

31 Jan '12, 10:43am. Here's a movie of test 2.1 (fixed grid + interpOrder=2). Everything is as in the movie of test 2 (above).

http://www.pas.rochester.edu/~martinhe/2011/binary/31jan12b.gif

Group research meeting. After discussing the results of tests 2 and 2.1 we agreed I'll try:

  • a colder disk, T = 0.1 K, instead of 1000 K
  • a taller, cylindrical (not flared) disk, such that the disk height is ~2 rsoft (it was ~0.8 rsoft)
  • more time resolution
  • tfinal = 2 orbits, since the warping should happen within this time.

If the above still produce warping:

  • phi = pi/2, i.e. make the disk rotate about the y axis instead of the z axis, to catch potential grid-related bugs
  • hydrostatic disk, with no ambient medium.

Find our previous research at: https://clover.pas.rochester.edu/trac/astrobear/blog/binary

Interpolation Verification

After seeing strange behavior - especially with respect to pressure - I decided to check that the interpolation methods used in prolongation were accurate. It is a little difficult to visualize this, but it is possible.

  • Start a run in fixed grid.
  • Select the frame you want to see interpolated.
  • Set the restart_frame to be that frame in global.data
  • Set the final frame to be 1 frame later.
  • Now the trick is to get the time between those two frames to be very small, so that there is no temporal development. To do this (see the sketch after this list)…
    • Get the time stamp of the restart frame (restart_time = restart_frame/final_frame*final_time), assuming start_time was originally 0
    • Set start_time to restart_time - (restart_frame*epsilon)
    • Set final_time to restart_time + epsilon
  • Then increase maxlevel to whatever desired level and set qtolerances to trigger refinement.
  • Then restart the simulation
  • It should take one root-level step and then output a single frame. When viewed, it should show the interpolated values.
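
The timing arithmetic from the recipe, as a small sketch (epsilon is an arbitrary small number; the function name is illustrative, not a global.data field):

    def interpolation_test_times(restart_frame, final_frame, final_time, epsilon=1e-9):
        restart_time = restart_frame / final_frame * final_time  # start_time was 0
        start_time = restart_time - restart_frame * epsilon
        new_final_time = restart_time + epsilon
        return start_time, new_final_time

    # e.g. restarting from frame 50 of 100 with final_time = 1.0:
    print(interpolation_test_times(50, 100, 1.0))
    # -> (0.49999995, 0.500000001): essentially no temporal development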

Here are images from an example where 4 levels of refinement were added to a BE sphere profile.

Panels, by interpolation method (index): Constant (0), MinMod (1), SuperBee (2), VanLeer (3), Monotone Centered (4), Parabolic* (5), Linear (6).

The "plateaus" in all but the linear and the parabolic are due to the slope being limited so as not to introduce any new extrema. The Parabolic does not conserve the total quantity and is only used for the gas potential (and time deriv). The 'linear' was just added but could likely have issues near shocks since it is not limited. But all seem to be working 'correctly'. There is a shortcoming in that each limiter is considered in each direction independently. A more multidimensional reconstruction that used a multidimensional limiter would ease the plateaus from persisting along the coordinate directions.

Issues with Self gravity

For a fixed-grid run we need phi before and after every advance, so for 4 time steps we would need to solve for phi 5 times (the solution after one step is reused as the solution before the next). With no additional elliptic solves, we can then use phi before and after each step to achieve second-order time accuracy.
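
In 1D the resulting update might look like this sketch (an assumed discretization, not the actual AstroBEAR source code):

    # Time-centered gravitational momentum source using phi before and
    # after the advance; interior cells only.
    import numpy as np

    def gravity_source(mom, rho, phi_old, phi_new, dt, dx):
        g_old = -(phi_old[2:] - phi_old[:-2]) / (2 * dx)
        g_new = -(phi_new[2:] - phi_new[:-2]) / (2 * dx)
        mom[1:-1] += dt * rho[1:-1] * 0.5 * (g_old + g_new)
        return mom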

For an AMR patch we need to take 2 time steps, so we need 3 solutions for phi. We could use the prolongated parent's version of phi, or the previous overlapping grid's latest version of phi. In principle they should agree at the coarse boundaries, since the previous grid's solution for phi was subject to the same boundary conditions (because phidot at coarse boundaries exactly agrees with the parent grids). Furthermore, the previous grid's solution for phi should be more accurate, since it accounts for the distribution of gas on the current level. Otherwise level n would end up using values for phigas that were successively prolongated from level 0. So it is important to use the overlapped version of phigas and not the prolongated one.

However, phidot should not be copied from old grids, since that phidot is from a previous time step. This can cause problems if phidot changes quickly. There will also be a mismatch between the prolongated and overlapped phidot, which can cause differences between adjacent cells, leading to strong gradients. So it is important to use the prolongated value of phidot and not the overlapped one. Additionally, since phidot is recalculated before successive prolongations, the same issue with repeated prolongation does not occur.

This is implemented in the code by setting

ProlongateFields = (/…,iPhiGas,iPhiDot,…/)

and by setting EGCopyFields = (/iPhiGas/)

And then, following a Poisson update, ghosting iPhiDot.

To make the scheme second order we need to ghost iPhiGas as well after the Poisson update; however, this means that we don't need to copy iPhiGas before the 2nd step.

Some binary wind capture accretion disk formation related papers

Shape

For those of you interested in Shape and its plotting/modeling capabilities:

http://bufadora.astrosen.unam.mx/shape/v4/screenshots/grid/screenshots.html

Meeting Update 01.24

Been working on the flux-limiting explicit thermal conduction and the presentations.

See the separate project pages "Magnetized Clumps" and "Magnetized Clumps with Shocks".

AstroBEAR Virtual Memory

AstroBEAR uses a huge amount of virtual memory (compared with its data and text memory) when running with multiple processors:

One Processor:

http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_1n1p1t.png

Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_2n8p4t.png

To understand the problem, I tried a very simple Hello World program. Here are the results from TotalView:

One Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t.png

Four Processors: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n4p4t.png

It's fair to say that the big virtual memory issue is not related to the AstroBEAR code; it's more related to OpenMPI and the system. I saw online resources arguing that virtual memory includes memory for shared libraries, which depends on the other processes running. That makes sense to me, especially since I ran the Hello World program with the same setup at different times and found that it uses different amounts of virtual memory:

http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_2ndRun.png http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_3run.png

I'm reading more on virtual memory and shared libraries.

Meeting Update 01/24/2012 - Eddie

  • The Google calendar is up and running. So far it only has the Astro lunches and colloquia, our weekly group meeting, PAS colloquia, and journal club. Anyone in our group can create/manage the events on the calendar. What else do we want on it? The AST462 course (not really a group thing, but it does affect Adam, Erica, and me), other weekly meetings, upcoming conferences? Leave comments! Let's use this blog for discussion, as it was originally intended!
  • Speaking of blog comments, did anyone ever look into email notifications, or do we even want them? It seems that there is a plugin for this: http://trac-hacks.org/wiki/FullBlogNotificationPlugin. Should I take a stab at this or leave it for someone who is more familiar with trac and how plugins are handled?
  • Zcooling routines have been implemented. I'm now in the process of testing them with my 1DJet simulation. I'll try out Jonathan's new 1D implementation as well. The documentation on Zcooling is coming along. See zcooling for more details. The table reformatting stuff will be important for anyone who gets this code in the future. The section on the interpolation that I used will be added soon.
  • I also started documentation on the Wind object, since I made some changes to it in order to use it in my 1DJet module. See the wiki page WindObjects.

UPDATE: Jonathan did look into email notification for blog comments a while ago. The Announcer plugin is supposed to support email notification for blogs in the same way it does for tickets; however, Jonathan could not get this to work. I looked into it today as well and found the same things that Jonathan did. The solution is to use the separate FullBlogNotification plugin. This is a standalone plugin that does its own notification without going through Announcer.

The way notifications work is that the wiki sends an email to username@pas.rochester.edu. Announcer allows each user to specify a different email address in their preferences on the wiki if they don't want to use this pas address; in short, that feature will not work for blog comments.

Vx lineouts of CRL618

Clump, stratified ambient density:

movie: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-strat.gif
50 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-strat0055.jpeg
100 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-strat0110.jpeg
146 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-strat0160.jpeg

Clump, non-stratified ambient density:

movie: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-nostrat.gif
50 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-nostrat0055.jpeg
100 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-nostrat0110.jpeg
146 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_clump-nostrat0160.jpeg

Jet, stratified ambient density:

movie: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-strat.gif
50 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-strat0055.jpeg
100 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-strat0110.jpeg
146 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-strat0160.jpeg

Jet, non-stratified ambient density:

movie: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-nostrat.gif
50 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-nostrat0055.jpeg
100 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-nostrat0110.jpeg
146 yr: http://www.pas.rochester.edu/~blin/jan2012crl618/vx_jet-nostrat0160.jpeg

Simpler Self gravity implementation seems to work

So, I gave up on trying to integrate the potential due to particles with that due to the gas to get a better gas potential at coarse-fine boundaries. Now, only the gas potential and its time derivative are stored in q. Gas that is accreted during a finest-level step will not be reflected in the boundary values of the gas potential at the coarse boundaries until after the coarser grid updates. Fortunately, since the accretion is incremental, this should not pose a significant problem. Additionally, if the accretion is steady, then the time derivative of the gas potential should include this effect. I have performed a suite of 2D simulations with various levels of refinement and all seem to handle the collapse nicely.

movie

And the one 3D TrueLove problem I ran also seems to handle the collapse well. In the absence of accretion or particles, the scheme should be second order in time, although I have yet to verify this.

Also, it was necessary to reduce the softening length to 1 cell, instead of 4 cells, since the particle would form at a cell corner but the gas would collect in a cell center. Due to the softening, the gas cell would exert a greater pull than the particle, which would cause the gas to wander away from the particle.

2D Bonnor Ebert Circle for quick and dirty runs ready

The output is at /alfalfadata/erica/PROJECTS/BEsphere2D/out and the executable was made from /alfalfadata/erica/DEVEL/BEsphere2D. The main modification in the code was moving the clump object off of the default center position of the box (0,0,0). Adding the clump parameter xloc to the problem module allows one to read in the location of the clump; I set it to (2,2,2). To make sure the bounds in global.data were set such that the sphere was sliced down the middle, the domain was set to run from (0,0,2) to (4,4,2). That is a slice in the xy plane at z=2, i.e. a slice down the center of the sphere at xloc = (2,2,2).

CRL 618

4 feb '12.

Three runs from the table below have completed. Bin is in the process of producing movies and plots:

https://clover.pas.rochester.edu/trac/astrobear/blog/blin02052012

Four of the new runs (see table below) are waiting in bluegene's queue. We should have them - along with plots, diagrams, etc. - in a week's time, before Adam's visit to Bruce. Additionally, Bin has 2 of these sims in the standard queue at bluehive. Note that the grid is slightly larger than before so we can see the edges of the object. Also, I have removed 1 AMR level because it adds time to the runs and post-processing and doesn't give us significant info about the object's dynamics. Bruce and I have discussed the separation of the ambient rings. The setup in runs 4, 5 and 8 (see table) seems to be consistent with the observations.


Telecon with Bruce, Adam and Jason on 26 jan '12.

  • We don't need high resolution (256x128x128+2amr) to compare with the observations, which is the main objective of the paper. We will drop 1, maybe 2, refinement levels, which should decrease the simulation production time.
  • We don't need a constant ambient density model
  • We'll do velocity and density line-outs along the jet axis at y = {-1/2, -1/4, 0, 1/4, 1/2} Rclump
  • We'll do PV diagrams with a slit displaced Rclump/2 from the symmetry axis
  • Separation of the ambient rings from observations: ~1e16 cm = 0.9 kpc * 3e21 cm/kpc / 206265, i.e. ~1 arcsec at 0.9 kpc (see the sketch below). We're using a separation that is twice as large in the models, since some experimentation showed the rings to be too close otherwise.
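
The worked version of that estimate (the quoted formula is an angular separation of ~1 arcsec at d = 0.9 kpc, converted to cm):

    # Worked version of the ring-separation estimate above.
    kpc_cm = 3e21                   # cm per kpc, as rounded in the post
    sep = 0.9 * kpc_cm / 206265.0   # 206265 arcsec per radian
    print(f"{sep:.2e} cm")          # -> ~1.3e16 cm, i.e. the quoted ~1e16 cm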

New simulations:

Model name | Running order | Running schedule | No. clumps | Jet | Dens contrast with ambient | Ambient | Resolution | Image
a | 1 | 27 Jan. Completed | 1 | 0 | 200 (as the previous runs) | r^-2 + rings | 128x80x80+2amr | http://www.pas.rochester.edu/~blin/feb2012/clump-strat0110.jpeg
b | 2 | 27 Jan. Completed | 0 | 1 | 200 | r^-2 + rings | ditto | http://www.pas.rochester.edu/~blin/feb2012/jet-strat0110.jpeg
c | 3 | 28 Jan. Completed | 1 | 0 | 50 | r^-2 | 150x100x100+2amr | http://www.pas.rochester.edu/~martinhe/2011/5feb12-515.png
d | 4 | 28 Jan. Completed | 1 | 0 | 50 | r^-2 + rings | ditto | http://www.pas.rochester.edu/~martinhe/2011/4feb1420.png
e | 5 | 30 Jan. Running | 0 | 1 | 50 | r^-2 | ditto |
f | 6 | 31 Jan. Running | 0 | 1 | 50 | r^-2 + rings | ditto |
g | 7 | 31 Jan. Queued | 2 (one at the initial condition, another after the 1st has left the grid) | 0 | 50, 50 | r^-2 | ditto |
h | 8 | 1 Feb. Queued | 2 (one at the initial condition, another after the 1st has left the grid) | 0 | 50, 50 | r^-2 + rings | ditto |

New synthetic diagrams along the correct slit position angle. (It turns out you cannot rotate the slit in Shape. The PV diagrams that I posted on Saturday correspond to a slit position angle of 0 degrees, but we want this to be 90 deg. Thus I've rotated the objects with respect to the slit and used more data from the simulations in order to see bow shocks.)

Notes:

  • The clump/jet are shown going up because Shape's slit is like that (there's no way around this in Shape; I could rotate the synthetic images later on (with gimp) for the paper versions).
  • Gray scales in the left column (density maps) are logarithmic and have the same limits for all images.
  • Gray scales in the emission columns (2 and 4) are logarithmic and do not have the same limits.
  • Gray scales in the pv columns (3 and 5) are linear and do not have the same limits.

Clump, stratified ambient density:

Log(dens) [cm⁻³]: http://www.pas.rochester.edu/~martinhe/c-s-dens0001.jpeg
Emission (dens²) 90°: http://www.pas.rochester.edu/~martinhe/c-s-emiss-90.jpg
PV 90°: http://www.pas.rochester.edu/~martinhe/c-s-pv-90.jpg
Emission (dens²) 60°: http://www.pas.rochester.edu/~martinhe/c-s-emiss-60.jpg
PV 60°: http://www.pas.rochester.edu/~martinhe/c-s-pv-60.jpg

Clump, constant ambient density:

Log(dens) [cm⁻³]: http://www.pas.rochester.edu/~martinhe/c-n-dens0000.jpeg
Emission (dens²) 90°: http://www.pas.rochester.edu/~martinhe/c-n-emiss-90.jpg
PV 90°: http://www.pas.rochester.edu/~martinhe/c-n-pv-90.jpg
Emission (dens²) 60°: http://www.pas.rochester.edu/~martinhe/c-n-emiss-60.jpg
PV 60°: http://www.pas.rochester.edu/~martinhe/c-n-pv-60.jpg

Jet, stratified ambient density:

Log(dens) [cm⁻³]: http://www.pas.rochester.edu/~martinhe/j-s-dens0000.jpeg
Emission (dens²) 90°: http://www.pas.rochester.edu/~martinhe/j-s-emiss-90.jpg
PV 90°: http://www.pas.rochester.edu/~martinhe/j-s-pv-90.jpg
Emission (dens²) 60°: http://www.pas.rochester.edu/~martinhe/j-s-emiss-60.jpg
PV 60°: http://www.pas.rochester.edu/~martinhe/j-s-pv-60.jpg

Jet, constant ambient density:

Log(dens) [cm⁻³]: http://www.pas.rochester.edu/~martinhe/j-n-dens0000.jpeg
Emission (dens²) 90°: http://www.pas.rochester.edu/~martinhe/j-n-emiss-90.jpg
PV 90°: http://www.pas.rochester.edu/~martinhe/j-n-pv-90.jpg
Emission (dens²) 60°: http://www.pas.rochester.edu/~martinhe/j-n-emiss-60.jpg
PV 60°: http://www.pas.rochester.edu/~martinhe/j-n-pv-60.jpg

Some notes comparing Dennis et al. 2008 with our sims.

- His clump/jet velocities are 100 km/s; ours are 400 km/s.

- His emission and PV diagrams are for 2.5D, which is why they have more resolution, but his 3D emission maps have a little less resolution than our Shape ones.

- His Fig. 8 is the average vx. Our vx line-outs are not averaged; they are along a single line in the x direction.


See:

https://clover.pas.rochester.edu/trac/astrobear/blog/blin01162012

and subsequent comments.

New ways to be notified of changes to the wiki

I installed a couple of plugins so now…

  • Blog authors will be e-mailed when someone comments on their blog posts (although this e-mail can only be sent to their username@pas.rochester.edu)
  • Wiki pages now have a 'Watch This' link in the upper right corner. If you select to watch a page, you will receive e-mails whenever the page is modified.
  • Under the Preferences menu (right next to logout), there is an announcement tab where you can adjust various settings related to ticket notifications. (I would suggest folks check all of the boxes under "ticket component subscriptions" if they want to stay on top of all tickets.)

Resources

Some sample numbers for 3D runs (the arithmetic is sketched after the list)

  • 32³+4 = 512³ effective, with a high filling fraction
    • 2.2 GB/frame x 100 frames = 0.2 TB
    • 64 procs @ 1.2 GB of memory per processor = 76 GB of active memory
    • 12800 SUs
  • 32³+5 = 1024³ effective, with a smaller filling fraction
    • 1.5 GB/frame x 100 frames = 0.15 TB
    • 64 procs @ 1 GB of memory per processor = 64 GB of active memory
    • 19200 SUs
  • 32³+5 = 1024³ effective, with a high filling fraction (projected)
    • 12 GB/frame x 100 frames = 1.2 TB
    • 512 procs @ 1 GB of memory per processor = 512 GB of active memory
    • 102400 SUs
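
The arithmetic behind these numbers, as a quick sketch (all inputs are just the figures listed above; nothing here is a new measurement):

    cases = [
        # (base, levels, GB/frame, frames, procs, GB/proc, SUs)
        (32, 4,  2.2, 100,  64, 1.2,  12800),
        (32, 5,  1.5, 100,  64, 1.0,  19200),
        (32, 5, 12.0, 100, 512, 1.0, 102400),
    ]
    for base, lev, gbf, frames, procs, gbp, sus in cases:
        eff = base * 2**lev                  # effective resolution per side
        print(f"{base}^3+{lev} -> {eff}^3 effective, "
              f"{gbf * frames / 1000:.2f} TB on disk, "
              f"{procs * gbp:.0f} GB active memory, {sus} SUs")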

Projected needs for research

  • 6-10 runs for colliding flows
  • 6-10 runs for gravo collapse
  • 12-20 runs at 19200 SUs = 230400 - 384000 SUs
  • 12-20 runs at 0.4 TB = 4.8 - 8 TB of data

This is effectively the entire 192 infiniband nodes for 50-80 days or all of bluegene for 1 to 2 weeks (assuming memory issues are not a problem). If we had exclusive access to a quarter of bluegene, we would be able to finish these runs in 1 to 2 months.

Projected needs for the group's research

This is, of course, just for the research I would like to do in the next few months. When you consider the rest of the group, we are looking at maybe 3-4 times that.

As far as storage space is concerned, we currently have 21 TB of storage space, of which 5 TB is available. If each grad student/postdoc has 5-8 TB of active data that needs to be stored until a paper is published, then we would need at least 30 TB. (That's assuming folks get rid of their old data as soon as the results are published.) At some point (when we have the new bluegene machine) we will be generating too much raw data to effectively store, and we'll have to figure out how to generate useful output (i.e. spectra, column density, only coarse grids, etc…)

Odd, higher res BE movie

Here are movies of a higher resolution BE run:

http://www.pas.rochester.edu/~erica/upperboundBE.gif — upper bound on color bar

http://www.pas.rochester.edu/~erica/noboundBE.gif — no bound on color bar

http://www.pas.rochester.edu/~erica/reg1182012.gif — initial bounds on color bar

  • 86³ root grid + 3 levels
  • RefineVariableFactor set to 0 to trigger refinement based on Jeans length
  • Ambient medium density = density at outer edge of sphere, entire domain is isothermal
  • boundary conditions = all velocities into and out of grid are stepped on
  • Poisson BC's = multipole

I killed the run after the diamond shape appeared. We have seen this before and attributed it to pressure waves running off the corners of the box. However, the disconcerting asymmetry in the last frame is new.

I am going to move the edges of the box further away from the sphere and change the boundary conditions to periodic, to mimic the sphere sitting in a homogeneous, 'infinite' medium.

========================================================

86³ base grid + 5 levels, periodic BCs on the box:

Meeting Update 01/17/2012 - Eddie

I have a different simulation set up that is now more suitable for what we will want for the HST proposal and further research. It is effectively a 1D pulsed jet running through an ambient medium. The jet is pulsed via velocity perturbations. I added perturbation parameters to the wind object, so anyone who wants to simulate something with a pulsed inflow might be able to use that. It currently supports square-wave and sine-wave perturbations; I plan on adding a random perturbation option as well. The 1D jet simulation also has a magnetic field perpendicular to the propagation, and cooling can be turned on. Here's an image of what I have so far on a run with no cooling:
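
A hedged sketch of what the perturbation options might look like; the names and signature are illustrative, not the actual wind-object parameters:

    # Square- and sine-wave velocity perturbations on top of a mean jet speed.
    import numpy as np

    def pulsed_velocity(t, v0, amplitude, period, shape="sine"):
        """Inflow speed at time t with a periodic perturbation."""
        phase = 2.0 * np.pi * t / period
        if shape == "sine":
            return v0 * (1.0 + amplitude * np.sin(phase))
        if shape == "square":
            return v0 * (1.0 + amplitude * np.sign(np.sin(phase)))
        raise ValueError(shape)  # the planned random option is not sketched here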

I've been reading up on interpolation schemes. Tomorrow I will dig a little deeper into the cooling routines, and really start making progress on adding Pat's cooling tables. Hopefully, next week we'll have this new form of cooling implemented and the aforementioned simulations running.

Meeting update 1/17/2012

Particle Kicks: http://www.pas.rochester.edu/~erica/SinksUpdate1172012.html Both gas-gas and particle-gas conservation of momentum have been implemented. The kick that was present before is now gone.

Bonnor Ebert: Working on simulations. I would like to think more about the system I am trying to model and the model I am trying to create for that system. I would like to choose appropriate boundary conditions, and I wonder whether I should be initializing a stable BE sphere or an unstable one when I run the tests I emailed Phil about.

Here is a heavy run: http://www.pas.rochester.edu/~erica/BEupdate1172012.html. I am getting HIGH CFLs and the totals.dat file is wonky.

Working on getting a 2D version of the BE problem up and running.

Meeting Update 01.17

I'll talk about my poster and the AAS meeting. I'm currently working on the paper on the magnetized clump with thermal conduction. I'd also like to discuss what we will present at the HEDLA meeting, coming up Apr 30th.

17 Jan 2012 update

The AAS meeting went quite well. I got positive feedback about my talk (http://www.pas.rochester.edu/~martinhe/austin.pdf) and chatted with most of the gang on the PN paper and with Dongsu Ryu (who invited me to give a talk in South Korea). Everyone sent greetings to Adam.

I've sent the abstract and registered for the HEDLA meeting. Adam, Eric and I should meet soon to discuss the new tower runs.

The four CRL618 runs have finished. Bin (see her post) is making movies. Do we want synthetic emission and PV diagrams of these runs?

I'm updating the PN paper. The new version will be submitted to MNRAS. I'm almost done. I should have it ready for Adam's and Eric's revision by Wednesday morning.

I'm writing the very last part of the magnetic tower paper. This is section 3.3, Energy fluxes, where we compare our calculations of the Poynting-to-kinetic energy ratio with those of other flows, as requested by Pat. BTW, see http://arxiv.org/abs/1201.2681

Latest CRL618

Meeting update

Happy New Year all.

I am working on setting up new BE stuff to begin a collaboration with Prof. Phil Myers - reading his latest papers and reviewing old studies. I am also checking on the status of the conservation of momentum added to the sink algorithms.

Erica

P.O.I:

http://www.pas.rochester.edu/~erica/SinksUpdate.html http://www.pas.rochester.edu/~erica/comparison.gif

Meeting Update 01/10/2012 - Eddie

The non-equilibrium cooling routines are coming along. This has also been called modified Dalgarno-McCray cooling. The files have been ported over from AstroBEAR 1, and the necessary changes have been made to make them work in AstroBEAR 2.

I used Bin's radiative instability module as a quick check to see if NEQ cooling was working. You can tell from the output that it is definitely cooling, and it is cooling more than DM cooling. I started a new wiki page on this: u/neqcooling. So far this page just contains some VisIt plots from the radiative instability module.

I am now working with a different module to just run a shock through an ambient material. This setup will make it easier to see the post-shock material, because the radiative instability module will often produce oscillations. After I determine that this is accurate, I can start working on the 6th type of cooling, which will include Pat's tables (which he has already sent to me).

Working on implementing conservative momentum conservation in 1D with self gravity

Need to:

  • Remove self gravity source term from source module
  • Add self gravity source terms to reconstructed left and right states
  • Modify final fluxes to include self gravity source terms.

Here's a plot showing momentum conservation with the new approach:

However, there were striations.

These were due to the approximation it uses to calculate the density… It turns out that when the errors in the density derived from phi are large compared to the actual density, these striations appear.
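
For context, here's my reading of the conservation-form trick as a 1D sketch (an assumed discretization, not necessarily the code's exact implementation):

    # With rho derived from Poisson's equation, rho = phi''/(4 pi G), the
    # source -rho*dphi/dx becomes the difference of a gravitational momentum
    # flux F = (dphi/dx)^2 / (8 pi G). Momentum is then conserved exactly,
    # but errors in phi feed into the effective density, which is how the
    # striations can arise.
    import numpy as np

    def gravity_flux_update(mom, phi, dt, dx, G=6.674e-8):
        gface = (phi[1:] - phi[:-1]) / dx        # dphi/dx at cell faces
        F = gface**2 / (8.0 * np.pi * G)         # gravitational momentum flux
        mom[1:-1] -= dt / dx * (F[1:] - F[:-1])  # conservative difference
        return mom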

Here's a plot showing the relative error in the derived rho used in the source calculation and the actual rho.

Reducing the hypre error tolerance from 1e-4 to 1e-8 improved the situation and lessened the striations.

Finally, here's a comparison of the old and new methods:

movie

  • Modify gravitational energy source terms.
  • These modified fluxes will need to be stored in the fixup fluxes so they can be properly differenced between levels to ensure strict momentum conservation.
    • This will require a second call to store fixup fluxes. Perhaps a generic routine in data declarations would be better for storing fixup fluxes, emfs, etc…
  • Extend the same modifications to 2D and 3D by adding additional source terms in the transverse flux update.

What to do about phi

  • To be second order we need the gas potential before and after a hydro step.
  • Normally the gas potential after a hydro step would be the same as that before the next hydro step (requiring 1 Poisson solve per hydro step); however, with accretion, the gas in each cell (and the corresponding gas potential) can change.
  • Accretion, however, should not change the total potential (except for differences in softening), so recalculating the particle potential should allow for modifying the gas potential without another Poisson solve.

So a fixed-grid algorithm would look like the following (a toy sketch of the accretion bookkeeping follows the list):

  • Do a hydro step using the original gas potential and the predicted time-centered particle potential.
  • Calculate new gas potential (Poisson solve) and ghost
  • Momentum and energy flux correction
  • Advance particles using the back force from the gas
  • Calculate new sink potential
  • Store new total potential
  • Perform accretion
  • Update the new sink potential using the new particle masses and difference the gas potential, keeping the total potential fixed (see the toy sketch below)
  • Repeat
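
And here's a toy illustration of that last update, i.e. how the gas/sink split of the potential can be rebalanced without a second Poisson solve. The function and the Plummer softening are illustrative assumptions, not the actual AstroBEAR routines:

    # After accretion, recompute the cheap analytic sink potential and take
    # phi_gas = phi_total - phi_sinks, leaving phi_total untouched.
    import numpy as np

    def rebalance_potentials(phi_total, particles, x, G=6.674e-8, soft=1e14):
        phi_sinks = np.zeros_like(x)
        for xp, mp in particles:                 # (position, mass) pairs
            phi_sinks += -G * mp / np.sqrt((x - xp)**2 + soft**2)
        return phi_total - phi_sinks, phi_sinks  # new phi_gas, phi_sinks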

For AMR we need a way to update the gas potential on level l at coarse-grid boundaries independent of a level l Poisson solve. This can be done using phi and phidot. Then, whenever we set the gas potential for the Poisson solve, we use phi, phidot, and phisinks. So the algorithm looks like the following:

Root Level

  • Do a hydro step using the original gas potential and the predicted time-centered particle potential.
  • Calculate new sink potential using predicted particle positions
  • Calculate new gas potential (Poisson solve) and ghost
  • Momentum and energy flux correction using phi_gas and phi_gas_old (stored in phi)
  • Update total potential and time derivative using the sink and gas potentials: phi_new = phi_gas + phi_sinks, phi_dot = phi_new - (phi + old_phi_sinks)
  • Prolongate old fields and the new total potential time derivative
    • After finer level steps, particle positions and masses will be different, so update phisinks as well as phigas, keeping phi constant.

Intermediate Level

  • Do a hydro step using the original gas potential and the predicted time-centered particle potential.
  • Calculate new sink potential using predicted particle positions
  • Update gas potential at coarse-fine boundaries using phi, phidot, and the predicted phi sinks
  • Calculate new gas potential (Poisson solve) and ghost
  • Momentum and energy flux correction
  • Update total potential and time derivative using the sink and gas potentials.
  • Prolongate old fields and the new total potential time derivative
  • After finer level steps, particle positions and masses will be different, so update phisinks as well as phigas, keeping phi constant.
  • Repeat

Finest Level

  • Check for new particles (do this after ghosting)
  • Perform accretion
  • After accretion, particle positions and masses will be different, so update phisinks as well as phigas, keeping phi constant.
  • Do a hydro step using the gas potential and predicted time-centered particle forces.
  • Advance particles using the back force from the gas
  • Calculate new sink potential using the advanced particle positions
  • Update gas potential at coarse-fine boundaries using phi, phidot, and phi sinks
  • Calculate new gas potential (Poisson solve) and ghost
  • Momentum and energy flux correction
  • Update total potential and time derivative using the sink and gas potentials.
  • After finer level steps, particle positions and masses will be different, so update phisinks as well as phigas, keeping phi constant.
  • Repeat

Of course, the calculation of the predicted particle positions on each level (other than the maxlevel) can be as sophisticated as necessary. One could use the original particle positions and velocities alone, or advance the level's own version of the particles using the same advance algorithm as well as gas forces, etc… Note this is necessary if one wants to thread these advances, since the particle masses may change due to accretion in the middle of a coarse-level update. But this threading would still require use of the old time derivative to update the ghost zones instead of the forward time derivative.

Added 1D functionality to code

The code can now run 1D AMR problems and produce output to chombo. Just set nDim = 1. The chombo files will look funny with AMR turned on, but that is just because chombo has to believe they are 2D AMR data sets. Because the data is in fact 1D, it thinks that some data is missing, and leaves these areas white.

You can do line-outs along the y=0 boundary to generate curves.

Here I've plotted the data on levels 0 and 1 (red line) with the data on levels 0-4 (blue line).

(Also, I have not tested self gravity or other source terms, or several of the 'objects', but most will need only very minor modification to work.)

It is not checked in, but interested folks can pull from alfalfadata/johannjc/scrambler_1Dworks

Non-equilibrium Cooling

The non-equilibrium cooling routines appear to be working. I made a new page where I will document things; so far it only has some initial plots to look at for the 1/6/12 conference call: u/neqcooling