Posts for the month of May 2012

New Revision 946:ff6bdbea174a in the official repository

meeting update

I have been reading loads of great papers and trying to absorb all the equations, methods, etc. I am currently working on putting together a written summary of what I have been learning, which will lead into the research project I have been pondering. I plan on working on that tonight and maybe tomorrow. As I have posted, my figure for my BE collapse is not quite the same as the B&P run at later times, though qualitatively it looks correct. I am thinking about equations that would be relevant in determining a predicted core density for the collapse; this has not been clearly presented in the literature. After my summary, maybe tomorrow or the next day, I want to work through some calculations to let me better understand my B&P output. This is the first step I'd like to take: fully understanding the B&P setup I have run, both the relevant collapse calculations and the numerics. Likely, this should have been done the other way around: equations first, sim second :)

In other news, I will be moving to BH ASAP to run calculations. I have been running a sim on bamboo for days now (sorry guys) to check the density I have gotten from my restarts (when it begins to deviate from B&P's results), just as a sanity check. I hear a new revision is out, so I will try compiling it on BH tonight.

I'd like to calculate the expected run time of my simulation to compare with how long it has been taking on bamboo. I think I remember there is a toy calculation somewhere on the wiki on how to do this.

Lastly, I think the B&P setup is slightly different than my run. It is not outlined in the section I have been focusing on (the isothermal collapse section), but in previous sections B&P say that all runs have angular velocity, and that on top of the 10% overdensity there is another density perturbation given by the azimuthal angle. I would like to a) determine an expected density for MY collapse setup and compare it to my results, and b) determine how the rotation would be expected to alter the density profiles I have found. This seems a cleaner way to compare my results, rather than rerunning the sim with the rotation.

Baowei's Meeting Update 05/29/12

  • Modifications to the official scrambler last week:

https://clover.pas.rochester.edu/trac/astrobear/log/Scrambler/?action=stop_on_copy&mode=stop_on_copy&rev=aac36d619caacf1eda6eb785046514dcc8c5e87c&stop_rev=916%3A47468f693d6f&limit=200

  • Created a folder /cloverdata/trac/astrobear/doc/talks/ for talks. Folks who gave talks can upload their talks to the folder or just send the file/link to me.

Meeting Update 05/29/2012 - Eddie

Not much to report this week. Last week, I spent the majority of my time wrestling with my radiative shock module and the Z cooling routines. I'm working on implementing a mu (mean molecular weight) that varies from cell to cell. Ionization decreases mu; recombination and the presence of heavier elements such as He increase mu. However, I'm not even messing with He yet. I have been able to set up an initial profile with a variable mu, but have not been able to keep the shocks steady. I'm not exactly sure what is going on, so I'll have to keep working on it.

In other news, I closed on my house last Friday, so I've been extremely busy moving, cleaning, fixing things, etc. Turns out that buying/owning a house is a ton of work, but the "16-hour-manual-labor-work-days" will be over soon.

New Revision 936:aac36d619caa checked in

Martin's update, 5/29 '12

-Binaries. Writing the setup and results sections of the paper. The 10 AU (http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-10may12B.gif) and 20 AU (http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-14may12a.gif) sims continue to run, time > 2 orbits, on Bhive. The 20 AU runs with 6 times more resolution (as discussed last Friday) are in the queues of kraken and steele, both of which are quite busy.

-Magnetic tower. Finished the HEDLA proceedings and am waiting for co-author comments. Working on the ApJ referee's corrections.

-AGN jet truncation. Have a reservation on Bgene today to figure out why the code is not working on that cluster. I'll then continue with the AGN runs (which I was doing on Bhive, but I'm now using it for the binary runs).

Made a page describing AstroBEAR's super-standard out.

new plots

Radial velocity matches B&P; however, density is off by a factor of ~4 at later times… I am checking my calculations and will post a follow-up soon.

http://www.pas.rochester.edu/~erica/Screenshot-5

http://www.pas.rochester.edu/~erica/Screenshot-6

My HEDLA2012 talk

You can find the PDF here: https://docs.google.com/open?id=0B8DQyRQjxuI8Y3piOXgweTd5QVk

Do we have a space for talks on the wiki?

Awesome pic on home page

Image of the day looks killer. What is the sim of? It looks like a tornado.

comparing my rad shock with Pat's

I translated my data so that it starts at the shock interface just like Pat's data. Also, I believe Pat used a small magnetic field, so I added that as well. The field is weak, so it doesn't really do much. These runs are supposed to be using Z cooling which includes Pat's tables. However, they don't look much different from the previous NEQ runs.

red is my run, black is Pat's…

Temperature looks okay and the cooling length is close, but Pat's postshock temperature is higher for some reason. This could be due to a different mean molecular weight: in the strong shock limit, the postshock temperature is proportional to the mean molecular weight. So if Pat's run has some helium or other species that bring the mean molecular weight closer to 1.22 instead of 1.0, that would explain the discrepancy.
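For reference, the standard strong-shock jump condition (not something taken from either run) makes the mu dependence explicit:

T_{\rm post} \simeq \frac{3}{16}\,\frac{\mu\, m_{\rm H}}{k_{\rm B}}\, v_{\rm s}^{2} \quad (\gamma = 5/3), \qquad \frac{T_{\rm post}(\mu = 1.22)}{T_{\rm post}(\mu = 1.0)} \approx 1.22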

Ionization Fraction looks good. The initial increase in ionization agrees very well. The fact that my data does not get as high as Pat's can be explained by the difference in postshock temperature.

Our simulations differ mostly in the density plot. I think this is just a code thing. They both agree very well on the postshock value (almost 4 times the preshock value, which makes sense). My code stops compressing when it stops cooling, and cooling stops once the temperature goes back down to the ambient/preshock temperature. Pat's run keeps cooling past the ambient/preshock temperature, so it keeps compressing.
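For reference, the textbook limits behind the factor-of-4 jump and the continued compression while cooling proceeds (not specific to either code):

\frac{\rho_2}{\rho_1} = \frac{\gamma+1}{\gamma-1} = 4 \quad (\gamma = 5/3,\ \text{strong adiabatic shock}), \qquad \frac{\rho_2}{\rho_1} \rightarrow \mathcal{M}^2 \quad (\text{isothermal/radiative limit})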

Meeting Update 05/22/2012 - Eddie

Took me a little bit longer than I expected to sort through Pat's data and plot it nicely in VisIt with my data. I can compare density, temperature, and ionization fraction. So far I've only looked at temperature because it doesn't look quite right. They are certainly within the same ballpark though…

red is my simulation, black is Pat's data

Meeting /0522/

Working on the clump paper. Sections 2,3,4 draft finished:

pdf link

Now need to work on the introduction and discussion section. Appendix?

Two more runs for this paper have been submitted (expected to run this week):

toroidal only with beta_avg = 1.

These can be used to examine the effect of different beta_avg and can be compared to the poloidal only runs because they have the same beta_avg.

More runs on the magnetized wind case:

http://www.pas.rochester.edu/~shuleli/HedlaLines/lowwind2.png

Been working on the diffusion hypre solver. Took away the anisotropic part (left to the explicit solver) to make the code simpler. Implemented the conduction front test, which has a self-similar solution for any diffusion index. Now trying to use the conduction front test to check whether the solver is quantitatively correct, and to check the AMR part of the code (encountering NaN issues).

Impact parameter for companion disk

If we modify simple BH accretion about a fixed object by applying an acceleration that is perpendicular to the direction of the flow, then it is reasonable to expect the 'backflow' to be aimed towards a position somewhere in between the object's original position and its current position, because the material that is captured has been accelerated, on average, towards a retarded position. If we take the time scale for this capture as RBondi/vwind, we can estimate the distance the object will have moved due to the acceleration over that time. If the acceleration is not perpendicular to the flow but at some angle, then it is the projection of the acceleration perpendicular to the flow that matters…

If we now imagine that we are in an inertial frame of reference that is comoving with the secondary's velocity at t0, we have the secondary accelerating towards the primary, and what matters is the projection of that acceleration relative to the incoming flow velocity. Given that the acceleration of captured material occurs over the capture time scale, the resulting offset should be radially outwards (a rough version of the estimate is sketched below). This would imply that the backflow would return outside of the secondary and would then proceed into a prograde orbit about the secondary.
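A minimal sketch of that estimate (my own notation: a is the binary separation, M_1 the primary mass, and theta the angle between the gravitational acceleration and the flow; the order-unity factors should not be taken seriously):

t_{\rm cap} \sim \frac{R_{\rm Bondi}}{v_{\rm wind}}, \qquad a_{\rm grav} = \frac{G M_1}{a^2}, \qquad a_{\perp} = a_{\rm grav}\sin\theta, \qquad \Delta x \sim \tfrac{1}{2}\, a_{\perp}\, t_{\rm cap}^{2} \sim \frac{G M_1 \sin\theta}{2 a^{2}} \left(\frac{R_{\rm Bondi}}{v_{\rm wind}}\right)^{2}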

Meeting Update

  • Created a work sheet for calculating various run parameters…
    X  Linear size of entire base grid
    N  Number of cpus
    D  Physical dimension of problem
    C  Cell updates per cpu per second
    T  Run time
    x  Linear size of base grid on each cpu
    L  Number of levels of refinement
    R  Refinement ratio
    F  Filling ratio between AMR levels (assumed to be fixed across all levels)
    E  Ratio of workloads between levels (E = F R^{D+1})
  • In general for a given linear resolution X, there will be ~X time steps per crossing time - and the number of cells to update each step will be X^D, so the total work load for a problem goes as X^{D+1}. If a single cpu can update C cells/second, then we have C N T = X^{D+1}.
  • Now if we divide a domain that is X^D into N pieces to distribute, then each piece will have X^D/N cells and a linear dimension x = (X^D/N)^{1/D} = X/N^{1/D}

  • Now with weak scaling the number of cells per cpu 'x' is kept constant - so we have X^D ~ N. If we were actually doing simulations then the walltime would go as T = X^{D+1}/(N C) ~ X ~ N^{1/D}, so the 1,000 core simulation would take 10 times as long (assuming 3D) because there would be 10 times as many time steps, though each time step would take the same amount of time since x is constant
  • For strong scaling the goal is to keep X constant, so x = X/N^{1/D} ~ 1/N^{1/D}. The number of time steps is unchanged but the time per step goes as x^D ~ 1/N, so the overall wall time T ~ 1/N. (Of course the memory requirement per core also goes as 1/N, so a strong scaling run would need 1000x as much memory on 1 core as on each of 1000 cores.)
  • For hybrid scaling the goal is to keep the wall time constant, since this is what effectively sets the resolutions we run at. So X^{D+1}/N is kept constant, which gives X^{D+1} ~ N and x ~ X/N^{1/D} ~ N^{1/(D+1)}/N^{1/D} ~ N^{-1/[D(D+1)]}, or x^D ~ N^{-1/(D+1)}, which is a fairly weak dependence - so hybrid scaling is very similar to weak scaling, but there is a slight decrease in the workload per cpu because in general more cpus → more time steps → shorter time per step.
  • With hybrid scaling the invariant is the wall-time, which can be chosen intelligently to be 1 day or 1 week… But with strong or weak scaling we have to motivate a choice for x (in the case of weak) or X (in the case of strong). The best way to do this is to choose a target number of cpus and a wall-time. Then you can back out what X and x are for that number of cpus and that wall-time, and use those values for the strong and weak scaling respectively.
  • Finally, with AMR - if we have a base grid with linear size X, then there will be F X^D cells marked for refinement - each of which will create R^D child cells that will need to take R substeps - so for each coarse level step there will be F X^D R^{D+1} level 1 cell updates, and since the whole simulation will consist of X coarse steps, this will be a total of F X^{D+1} R^{D+1} level 1 cell updates. Each additional level multiplies the work by another factor of E = F R^{D+1}, so for the entire simulation there will be X^{D+1} (1 + F R^{D+1} + (F R^{D+1})^2 + …) cell updates = X^{D+1} (1-E^L)/(1-E). So if we want to keep the wall-time constant as we add levels of AMR then we need to keep X^{D+1}(1-E^L)/(1-E) constant - so X ~ [(1-E)/(1-E^L)]^{1/(D+1)}
  • And we have the 'master equation' N C T = X^{D+1}(1-E^L)/(1-E) (a short numerical sketch of this is included at the end of this list)
  • So in summary:
    Weak Scaling       fixed number of cells per processor
    Strong Scaling     fixed total number of cells
    Hybrid Scaling     fixed wall-time
    Hybrid-AMR Scaling fixed wall-time (where F is the filling fraction and L is the depth)
  • There is also the issue of grid sizes causing a slow down in computation. In general the overhead required to update a grid of size X reduces the efficiency to X^D/(X+M)^D where M is of order 4 or 5… If the hyperbolic advance uses up G ghost zones then the extended ghost zone advance will reduce the efficiency to 2 X^D/((X+2G+M)^D + (X+M)^D)… For X of 10, G of 4, and M of 5 this gives 2000/(23^3+15^3) = 13% efficiency - without taking into account any latency or communication bottlenecks… This is just due to having small grids of size 10^3 - combined with extended ghost zones… If the grids are 20^3 this becomes 33%… But smaller grids mean a lot more overhead - especially as the linear size of a typical grid approaches 10-20 cells
  • Fixed a few threading bugs…
  • Started working on thesis…
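Below is a small stand-alone sketch of the master equation and of the small-grid efficiency estimate above. I've written the AMR workload factor as the geometric sum quoted in the AMR bullet (1 + E + … + E^L) rather than the closed form, to avoid the L = 0 edge case, and the numbers plugged in at the bottom (512 cores, 10^5 cell updates per core per second, a one-day wall-time, F = 0.1, R = 2, L = 4) are made-up illustration values, not measured AstroBEAR figures.

```python
# Sketch of the scaling relations above: the AMR workload factor, the "master
# equation" N*C*T = X^(D+1) * (1 + E + ... + E^L), and the small-grid efficiency.

def amr_factor(F, R, D, L):
    """Workload multiplier from L levels of AMR: 1 + E + E^2 + ... + E^L with E = F*R^(D+1)."""
    E = F * R ** (D + 1)
    return sum(E ** lev for lev in range(L + 1))

def base_grid_size(N, C, T, D, F=0.0, R=2, L=0):
    """Back out the affordable base-grid linear size X from N*C*T = X^(D+1) * amr_factor."""
    return (N * C * T / amr_factor(F, R, D, L)) ** (1.0 / (D + 1))

def grid_efficiency(X, G=4, M=5, D=3):
    """Efficiency of updating a grid of linear size X with M overhead zones and an
    extended ghost-zone advance of G zones, as estimated in the bullet above."""
    return 2.0 * X ** D / ((X + 2 * G + M) ** D + (X + M) ** D)

if __name__ == "__main__":
    N, C, T, D = 512, 1.0e5, 86400.0, 3          # cores, cell updates/s/core, wall-time [s], dimension
    X = base_grid_size(N, C, T, D, F=0.1, R=2, L=4)
    print("affordable base grid: X ~ %.0f cells" % X)
    print("efficiency of 10^3 grids: %.0f%%" % (100 * grid_efficiency(10)))   # ~13%, as above
```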

Meeting Update 05/15/2012 - Baowei

  • Working on scaling test on Ranger Ticket #193.

Meeting Update 05/15/2012 - Eddie

Nonequilibrium cooling (NEQ) aka Modified DM is working with my radiative shock module. This type of cooling is more efficient than regular DM, so the problem had to be physically scaled down a little bit in order to see the cooling region.

Here is the expected temperature profile NEQ_temp.gif

Here are the profiles for various densities (total, H, and HII): NEQ_rhos.gif

The problem was that I didn't realize that the source term modules currently use q in primitive form, while the ionization routines assume q is in conservative form. So all I had to do was a few little conversions and everything worked. The routines will need some more modification and testing to use the other available species: H2, He, HeII, HeIII. For now I'll just stick with H and HII, and I'll move on to Z cooling to see what problems that creates.
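In case it's useful to anyone else, here is a generic sketch of the kind of conversion involved. The variable layout (density, velocity, pressure for primitive vs. density, momentum, total energy for conservative) is just the standard ideal-gas convention; it is not meant to reproduce AstroBEAR's actual q array or the ionization routines' interface.

```python
# Generic primitive <-> conservative conversion for an ideal gas in 1D.
# This is standard textbook bookkeeping, not AstroBEAR's actual data layout.
GAMMA = 5.0 / 3.0

def prim_to_cons(rho, v, p):
    """(density, velocity, pressure) -> (density, momentum, total energy density)."""
    return rho, rho * v, p / (GAMMA - 1.0) + 0.5 * rho * v * v

def cons_to_prim(rho, mom, en):
    """(density, momentum, total energy density) -> (density, velocity, pressure)."""
    v = mom / rho
    return rho, v, (GAMMA - 1.0) * (en - 0.5 * rho * v * v)

# A routine that assumes conservative variables will silently give wrong answers if
# handed a primitive state; converting on the way in (and back out) is the whole fix.
rho, v, p = 1.0, 2.0, 0.6
r2, v2, p2 = cons_to_prim(*prim_to_cons(rho, v, p))
assert abs(r2 - rho) < 1e-12 and abs(v2 - v) < 1e-12 and abs(p2 - p) < 1e-12
```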

UPDATE

I adjusted how ionization/recombination is handled, so that when cooling turns off the species densities can still change. This leads to more physically accurate results: NEQ_rhos1.gif

Update/0515/

Working on the paper. Outline:
1. Introduction
2. Physics description, including calculation of time scales
3. Simulation setup
4. Results
5. Discussion

Have been working on parts 3 and 4 mostly. Also wrote up some of part 2. The time scale part needs more time though.

Meanwhile, finished several runs with magnetized wind simulations: the magnetic energy density is 0.02, 0.1, 0.5 times the peak magnetic energy density inside the clump.
Some results:

http://www.pas.rochester.edu/~shuleli/HedlaLines/highwind.png
http://www.pas.rochester.edu/~shuleli/HedlaLines/lowwind.png
http://www.pas.rochester.edu/~shuleli/HedlaLines/hydrowind.png
http://www.pas.rochester.edu/~shuleli/HedlaLines/highwindtr.png
http://www.pas.rochester.edu/~shuleli/HedlaLines/lowwindtr.png

weakly magnetized wind with magnetized clump, movie

CIRC Poster Session

I hope everyone had a great time at the CIRC Poster Session last Friday. Here are some photos of our group. Thanks for being there.

http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010013.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010014.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010016.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010024.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010025.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010031.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010032.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010046.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010047.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010049.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010050.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010051.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010064.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010066.JPG
http://www.pas.rochester.edu/~bliu/PosterSession/AdamGroup/P1010070.JPG

Co-rotating binaries disk formation sims No .1

6 June '12, 10 AU case, Mdot_primary = 10^{-5} Mo yr^{-1}

http://www.pas.rochester.edu/~martinhe/2011/binary/6jun12.png Disk mass evolution. Compare with pk model 2, fig. 1 top, but note that q_pk = 3 while q_us = 1.5. I'll soon have a similar plot for the 20 AU case, which should be compared with M&M figure 5 bottom; M&M see an increasing disk mass with a final value of 5x10^{-6} Mo yr^{-1}.
http://www.pas.rochester.edu/~martinhe/2011/binary/7jun12.png Evolution of the mass accretion rate onto the particle. The converging value of ~5x10^{-6} seems a factor of 50 higher than expected. Comparing with table 3 of M&M, models 1 and 2: Mdot_M&M ~ 0.9-3x10^{-6}; Mdot_us ~ 5x10^{-6}, so we seem to be off by a factor of order 1.6-5.6. Comparing with pk model 2, fig. 1 bottom (but note that q_pk = 3 while q_us = 1.5), we are off by a factor of ~30. Investigating further.

-20 AU preliminary early evolution test with more resolution than before: http://www.pas.rochester.edu/~martinhe/2011/binary/20au-1000k-64x64x32-4amr-correctWINDdens.gif. This sim has a resolution of 64x64x32 + 3 AMR levels. AGB wind inflow enters through the -x, +y and ±z boundaries. Improved versions of this sim are running on ranger and queued on kraken.


authors | q = m1/m2   | a [AU] | tempw [K] | velw [km/s] | total run time [yr] | resolution
us      | 1.5/1 = 1.5 | 10     | 1000      | 10          | 40 (2 orb)          | soon
us      | 1.5/1 = 1.5 | 20     | 1000      | 10          | 57 (2 orb)          | soon
us      | 1.5/1 = 1.5 | 30     | 1000      | 10          | 104 (2 orb)         | soon
us      | 1.5/1 = 1.5 | 40     | 1000      | 10          | 160 (2 orb)         | soon
pk      | 3/1 = 3     | 3      | soon      | 10-20       | 10^4-10^6           | soon
pk      | 3/1 = 3     | 10     | soon      | 10-20       | 10^4-10^6           | soon
pk      | 3/1 = 3     | 30     | soon      | 10-20       | 10^4-10^6           | soon
pk      | 3/1 = 3     | 100    | soon      | 10-20       | 10^4-10^6           | soon
pk      | 1.8/.6 = 3  | 3      | soon      | 10-20       | 10^4-10^6           | soon
pk      | 1.8/.6 = 3  | 10     | soon      | 10-20       | 10^4-10^6           | soon
pk      | 1.8/.6 = 3  | 30     | soon      | 10-20       | 10^4-10^6           | soon
pk      | 1.8/.6 = 3  | 100    | soon      | 10-20       | 10^4-10^6           | soon
vb      | 1.2/.6 = 2  | 10-100 |           |             |                     |

  • velw = 10 km/s
  • tempw = 1000 K
  • mprim = 1 Mo
  • msec = 0.5 Mo
  • resolution = 63^3 cells + 2 particle refinements
  • rBondi = G msec / (velw^2 + velorb-sec^2) (evaluated numerically just below this list)
  • lgrid = 2.5 rBondi
  • rsoft = 4 cells
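A quick numerical evaluation of the rBondi expression above, using the secondary mass and wind speed from this list and the orbital speeds quoted in the run table below; the constants are standard values and this is just the formula converted to AU, nothing taken from the runs themselves:

```python
# Evaluate rBondi = G * msec / (velw^2 + velorb_sec^2) for the parameters listed above.
G    = 6.674e-11     # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
AU   = 1.496e11      # m

m_sec  = 0.5 * MSUN  # secondary mass (from the list above)
v_wind = 10.0e3      # wind speed [m/s]

for v_orb_kms in (10.0, 6.4, 5.2, 4.4):        # orbital speeds quoted in the run table
    v_orb = v_orb_kms * 1.0e3
    r_bondi = G * m_sec / (v_wind ** 2 + v_orb ** 2)
    print("v_orb = %4.1f km/s  ->  rBondi ~ %.1f AU" % (v_orb_kms, r_bondi / AU))
```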

Inflow from the top, bottom, lower Y and larger X is killed.

date             | separation [AU] | res              | r max-ref-region | rBondi [cells] | rBondi/rsoft | velorbit-sec [km/s] | run                                                   | disk   | movies^a
14may (7 orbits) | 10              | 64^3 + 2amr      | rBondi/4         | 100            | 25           | 10.                 | ~11hr/orbit, 32p, Bhive                               | y      | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-10may12B.gif
19may            | 10              | 64^3             | 0                | 100            | 25           | 10.                 | ~5hr/16p, ranger                                      | y      | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-20may12.gif
18may            | 10              | 64^3 + 2amr      | rBondi/2         | 100            | 25           | 10.                 | ~52.5hr/64p, ranger                                   | y      | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-18may12.gif
14may (6 orbits) | 20              | 64^3 + 2amr      | rBondi/4         | 100            | 25           | 6.4                 | ~17hr/orbit, 40p, Bhive                               | y      | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-14may12a.gif
14may            | 30              | 64^3 + 2amr      | rBondi/4         | 100            | 25           | 5.2                 | ~42.5hr/128p, ranger (long q)                         | small? | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-14may12b.gif
18may            | 30              | 128^3 + 2amr     | rBondi/2         | 100            | 25           | 5.2                 | running (1.6 orbits), 64p, Bhive                      |        | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-21may12.gif
20may (stand by) | 30              | 128^3 + 2amr     | rBondi/4         | 100            | 25           | 5.2                 | running INTERMITTENTLY^c since 2pm, 1536proc, kraken  |        |
20may (stand by) | 30              | 64^3 + 3amr      | rBondi/4         | 100            | 25           | 5.2                 | running INTERMITTENTLY^c since noon, 1536proc, kraken |        |
14may            | 40              | 64^3 + 2amr      | rBondi/4         | 100            | 25           | 4.4                 | ~48hr/128p, ranger (long q)                           | n      | http://www.pas.rochester.edu/~martinhe/2011/binary/4panels-14may12c.gif
23may (stand by) | 40_2D           | 128x128x2 + 3amr | rBondi/8         | 100            | 25           | 4.4                 | …/32p Bhive (q @ kraken, Bgene)                       | ?      | soon

^a No. density, log grey scale [cu], + color velocity [Mach]. Top left, top right, bottom left and bottom right show the entire orbital plane, a zoom into the orbital plane, the entire longitudinal plane for a phi angle parallel to the wind at the origin, and a zoom into the longitudinal plane, respectively.

^c https://clover.pas.rochester.edu/trac/astrobear/ticket/209#trac-add-comment

Meeting Update 05/08/2012 - Eddie

Sorry, not much to look at today, just a brief summary of what I'm working on:

  • Still working on getting ionization to work in my radiative shock module. I feel like I'm getting closer though. I'm no longer getting any compiler or runtime errors. The output just looks off. It may just have to do with a postshock jump condition for ionization? I will contact Pat on this.
  • I'm also working on a revision of the code that will take care of the BScale typo (#198) and protections for initialization (#171). The first ticket is trivial. The second ticket came back with an error when Baowei tried to run the test suite, so I'll take another look at that.
  • Lastly, I'm going to start running 3D jet simulations with different cooling strengths with Martin's CRL618 setup.

UPDATE

It appears that NEQ cooling (modified DM) yields much stronger cooling than plain DM. I shrank the problem domain by a factor of 40 to get a shock profile similar to the DM simulations. However, the shock is still not steady; the simulation seems to develop a forward and reverse shock. So something is obviously not right, and I'll keep working on it.

Meeting update

  • Working on scaling tests #202
  • Started gravoturb sims on Kraken #168
  • Colliding flow restarts on Kraken #203
  • Closed tickets #191 (threading), #195 (gfortran), #196 (cameras), & #199 (visit lower bounds)
  • CollidingFlows movies

Update/0507/

Some other results that were intended for HEDLA.
Started running the magnetized wind case.

http://www.pas.rochester.edu/~shuleli/HedlaLines/MCclumppercentage.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/MCcurrent.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/beta_cut_zoom.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/jet_cross_cut.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/vphicompre.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/finalcurrent.png

http://www.pas.rochester.edu/~shuleli/HedlaLines/enstrophy_ordered.png

current movie
awesome clump.gif

Baowei's Meeting Update 05/08/12

http://www.pas.rochester.edu/~bliu/Scaling/ranger.png

http://www.pas.rochester.edu/~bliu/Scaling/rangerSpeedup.png

  • Worked on runtime errors that happened when testing Revision 894: Ticket #200. Checked in Revision 894 for the new subgrid generation: Tickets #183, #185.
  • Working on unique TAGs for messaging stages to allow Max_levels to be larger than 10: Ticket #71, Ticket #192

Martin's update, 5/8 '12

Co-rotating binary sims: https://clover.pas.rochester.edu/trac/astrobear/blog/martinhe04232012

PN paper re-submitted to mnras after addressing the referee's comments. Eric has seen it too. Here's the new paper: http://www.pas.rochester.edu/~martinhe/paper.pdf

AGN jet truncation. MHD tests going well. Found parameters for a PFD jet and I'm running the last tests on it. Already seen that the RG wind creates a hydro island around the jet due to flux freezing.

CRL618. Reading some references sent by Bruce, which may be included in the paper. He's preparing a draft.

PFD vs MCL jet. The long adiabatic mag tower simulation is in the queue at bgene.

CIRC poster session this Friday, presenting the mag tower.

Remapping from Turbulence sims to 3D Cloud collapse sims

So I have two data sets from isotropic turbulence (isothermal and IICool) that consist of the density, velocity, (and energy for the IICool run) on a 512^3 mesh, which I am remapping to the middle half of a level 3 region of a simulation with a 1024^3 effective resolution…



Both should have been driven to the same Mach number, though the isothermal simulation has a higher temperature and XMU (20 K, 1.4, and a gamma of 1.0), whereas the IICool simulation has an equilibrium temperature of 16.98 K, an XMU of 1.27, and a gamma of 1.6666.

See blog:johannjc03062012b

The isothermal run can be rescaled in density, length, or time - to a different density and/or temperature and/or size - as long as the Mach number is constant. This leaves us with two free parameters. We want a virialized cloud, and we want the peak density to be within the Jeans length at the 512 resolution…?

The current peak density is 40622, which is about what is to be expected for Mach 21.5 turbulence with a background density of 100: 100*21.5^2 = 46225.

However, at a temperature of 10 K this gives a Jeans length of only 0.192981859204160 pc. If we want to resolve everything with a Jeans length of 32 cells… then this requires each cell to be .012 pc, or the cloud at 512^3 resolution to be only 6 pc across. We could lower the density, or raise the temperature…
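As a cross-check of the Jeans length quoted above (assuming the peak density is a number density in cm^-3, mu = 1.27 as in the IICool run, T = 10 K, and the standard definition lambda_J = sqrt(pi c_s^2 / (G rho)); these assumptions are mine, not stated explicitly above):

```python
# Cross-check of the Jeans length quoted above, under the assumptions noted in the lead-in.
from math import pi, sqrt

G  = 6.674e-11    # m^3 kg^-1 s^-2
KB = 1.381e-23    # J/K
MH = 1.673e-27    # kg
PC = 3.086e16     # m

T, mu, n_cm3 = 10.0, 1.27, 40622.0        # temperature [K], mean molecular weight, peak density [cm^-3]
cs2  = KB * T / (mu * MH)                 # isothermal sound speed squared
rho  = n_cm3 * 1.0e6 * mu * MH            # mass density [kg/m^3]
lamJ = sqrt(pi * cs2 / (G * rho))
print("Jeans length ~ %.3f pc" % (lamJ / PC))   # ~0.193 pc, consistent with the value above
```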

So if we have a virialized cloud, the Jeans length at the mean density is set by the cloud size and Mach number, and density enhancements of M^2 will have correspondingly smaller Jeans lengths; resolving those sets the resolution we need, which here allows for Mach 4 turbulence (see the sketch below)…
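Filling in the scalings that sentence is gesturing at (my reconstruction, assuming an isothermal cloud of size R in virial balance, G M_cloud / R ~ M^2 c_s^2, and a target of 32 cells per Jeans length):

\lambda_J(\bar\rho) \sim \frac{c_s}{\sqrt{G\bar\rho}} \sim \frac{R}{\mathcal{M}}, \qquad \lambda_J(\mathcal{M}^2\bar\rho) \sim \frac{R}{\mathcal{M}^2}, \qquad N_{\rm cells} \gtrsim 32\,\mathcal{M}^2

With 512 cells across the turbulence box, 32 M^2 <= 512 gives M of about 4 or less, which appears to be where the Mach 4 figure comes from.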

Options:

  • Generate lower mach number turbulence and make region larger and less dense?
  • Begin simulation with higher resolution… But it would be nice to get a chombo file complete with a solution for the potential to start with…

It would be nice to start with turbulence that wouldn't be strongly gravitating - at least locally … but would still be virialized… With IICool this should be possible - since virial equilibrium could be predominantly thermal support with lower mach number flows - that would trigger thermal instabilities and collapse would ensue…