Doomsday statistics worked out...
Assume civilizations are born at some rate A, and each civilization has some chance D of dying before a time B, at which point it goes extra-planetary… Then at any given time T there would have been A*T civilizations born, A*(T-B)*(1-D) extra-planetary civilizations, and at most A*B earth-like civilizations.
Extra-planetary | Earth-like | Dead | Total |
---|---|---|---|
A*(T-B)*(1-D) | < A*B | > A*(T-B)*D | A*T |
If we assume a constant birth rate for all civilizations, then the odds that you are born into an earth-like civilization instead of an extra-planetary one are at best p = B/(T-(T-B)*D), which gives a lower bound on D of D = (pT - B)/(pT - pB).
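Writing out the counting behind that expression (just the algebra implied by the table above):

\[ p = \frac{A B}{A B + A (T-B)(1-D)} = \frac{B}{T-(T-B)D} \quad\Rightarrow\quad D = \frac{pT - B}{p(T-B)} = \frac{pT - B}{pT - pB}. \]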
We can guess D and calculate how lucky we are to be in a young civilization, or we can guess p and estimate how unlucky we probably will be.
If B = 1e4 yr and T = 1e10 yr, then…
If D = 0 then we are 1 in 1,000,000, and if D = 0.5 then we are 1 in 500,000.
On the other hand, if we assume that p = 0.5, then our odds of surviving (1-D) would be about 2e-6, i.e. 1 in 500,000.
Of course we have not used the fact that our civilization has already been around for Y years… or how close our civilization is to going extra-planetary, i.e. how close Y is to B - and it does not appear the authors of the paper considered this either - but as it turns out this can be fairly important if the existential threats are similar to planet-killing asteroids that are equally likely to strike at any time.
Let's assume that the lifetimes of civilizations follow an exponential distribution, lambda*exp(-lambda*(t-t0)), for t < t0+B.
Furthermore, if a fraction (1-D) of civilizations survive for B years, and the lifetime of civilizations that live less than B follows an exponential distribution, then 1-exp(-lambda*B) = D, which gives lambda = -ln(1-D)/B. So given that our civilization is Y years old, the odds that we are destroyed before B are exp(-lambda*Y) - exp(-lambda*B) = (1-D)^(Y/B) - (1-D).
When Y = 0, this gives D consistent with the paper… but obviously when Y = B, this gives 0 as we would be in the process of colonizing other planets…
If we expect to be extra-planetary in 100 yrs, then Y = 0.99*B, and if we assume p = 0.5, which gives D = 1 - 2e-6, then this puts our odds of not going extra-planetary at only ~3e-7! Which is much better odds than (1 - 2e-6)…
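Plugging in the numbers (with 1-D = 2e-6 from the p = 0.5 case above and Y/B = 0.99):

\[ (1-D)^{Y/B} - (1-D) = (2\times 10^{-6})^{0.99} - 2\times 10^{-6} \approx 2.3\times 10^{-6} - 2.0\times 10^{-6} \approx 3\times 10^{-7}. \]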
Of course the value used for D assumed that civilizations that die live for exactly B years, rather than the average lifetime from the exponential. We could calculate that average as a function of lambda, then use p to get lambda, and then use lambda to get D… But the fact that we are this close to B means that reaching B must happen with some frequency - which means that most civilizations are long-lived, and we must just be lucky to have found ourselves in a pre-extra-planetary civilization.
Thinking about jars of numbered jelly beans
Let's assume we have a jar of numbered jelly beans of unknown size… and we randomly draw a jelly bean with a 1 on it… Does this tell us anything about the jar of jelly beans? Well - we now know the jar had at least 1 jelly bean with absolute certainty… But what are the odds the jar only had 2 jelly beans to choose from - or 3, 4, etc.? Or if someone offered us even odds on there being more than X jelly beans - should we take it?
Intuitively, we might expect that the odds that there are more than 1,000 jelly beans are slim at best… To actually calculate the odds, we need to assume some distribution for the possible number of jelly beans in the jar… Let's assume that any number of jelly beans is equally likely up to some limit L. Then for each possible count L', the odds of us drawing the jelly bean numbered 1 are just 1/L'. Now to determine the new (posterior) distribution for the number of jelly beans, we weight each prior probability by the odds of drawing a 1 for that count relative to the overall odds of drawing a 1. Summing the likelihoods gives 1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/L = H_L, the L-th harmonic number. The prior odds of there being L' jelly beans were just 1/L, but the odds of drawing a 1 with L' jelly beans are 1/L', so the posterior odds of there being L' jelly beans are (1/L')/H_L instead of 1/L. So we can ask ourselves: what are the odds of there being only 1 jelly bean? If we thought there could be somewhere between 1 and 100 jelly beans, then the odds are just 1/H_100 ~ 20%. The odds that there are either 1 or 2 jelly beans would then be (1/1 + 1/2)/H_100 = H_2/H_100 ~ 30%, and the odds are 50/50 that there are no more than 7 jelly beans…
Now if there could be up to N jelly beans with equal probability, and we drew the jelly bean numbered 1, what upper limit x gives us 50/50 odds? This is just H_x/H_N = 0.5. Since the harmonic series sum(1/n) can be approximated by integral(1/x) dx = ln(x), H_N asymptotes to ln(N), and we have ln(x)/ln(N) = 0.5, or x ~ sqrt(N). So for L = 100, we would estimate 10 beans as a sufficient upper limit… (it was actually 7)… For 10,000 beans, even odds say there are no more than 100 beans… But we can make no useful predictions without having some upper limit on the possible number of jelly beans… We can also ask what the expected (average) number of jelly beans is - as opposed to the 50/50 cutoff… This is just sum(L' * 1/L')/H_L = L/H_L, which asymptotes to L/ln(L)…
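A quick numerical check of those numbers (a throwaway standalone program of mine, not part of any existing module):

program jellybeans
   implicit none
   integer, parameter :: L = 100
   integer :: k
   real(8) :: H(L)
   ! harmonic numbers H(k) = 1 + 1/2 + ... + 1/k
   H(1) = 1d0
   do k = 2, L
      H(k) = H(k-1) + 1d0/dble(k)
   end do
   ! after drawing bean #1 with a uniform prior on jar sizes 1..L:
   ! P(size = k | drew #1) = (1/k)/H(L), so P(size <= k | drew #1) = H(k)/H(L)
   print *, 'P(only 1 bean)      =', 1d0/H(L)      ! ~0.19 for L = 100
   print *, 'P(1 or 2 beans)     =', H(2)/H(L)     ! ~0.29
   print *, 'P(7 or fewer beans) =', H(7)/H(L)     ! ~0.50
   print *, 'posterior mean size =', dble(L)/H(L)  ! = L/H_L ~ 19
end program jellybeans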
What about exponential distributions instead of linear?
So now suppose the jelly bean jar is flared and divided into L vertical sections, with exp(z) jelly beans in section z, and each jelly bean is labeled by its vertical section. Then the odds of drawing a jelly bean from section z' are just proportional to exp(z')…
Strong Scaling Test on Stampede
- Run Time
1. With hypre
Num of Cores | Run Time (secs) |
---|---|
1024 | 7963.7 |
2048 | 5862.3 |
4096 | 4005.8 |
2. Without hypre
Num of Cores | Run Time (secs) |
---|---|
1024 | 7401.0 |
2048 | 5436.6 |
4096 | 4126.2 / 4025.2 |
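Relative to the 1024-core runs, these numbers correspond to a speedup of about 1.36x at 2048 cores (~68% parallel efficiency) and 1.8-2.0x at 4096 cores (~45-50% efficiency). A throwaway standalone sketch that computes this (the program is mine; the runtimes are copied from the tables above):

program scaling_eff
   implicit none
   integer :: i
   integer, parameter :: cores(3)   = (/ 1024, 2048, 4096 /)
   real(8), parameter :: t_hyp(3)   = (/ 7963.7d0, 5862.3d0, 4005.8d0 /)
   real(8), parameter :: t_nohyp(3) = (/ 7401.0d0, 5436.6d0, 4126.2d0 /)
   print *, ' cores   speedup  eff. (hypre)   speedup  eff. (no hypre)'
   do i = 1, 3
      ! speedup = T(1024 cores)/T(N cores); efficiency = speedup / (N/1024)
      print '(i6,4f10.2)', cores(i), &
         t_hyp(1)/t_hyp(i),     t_hyp(1)/t_hyp(i)*cores(1)/dble(cores(i)), &
         t_nohyp(1)/t_nohyp(i), t_nohyp(1)/t_nohyp(i)*cores(1)/dble(cores(i))
   end do
end program scaling_eff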
- Scaling Test Result
- Runtime (plot)
- Runtime Considering Efficiency (plot)
- Cell Updates Per Second (plot)
- Standard output of last advance
- 1024 cores:
Info allocations = 79.8 gb 110.0 mb
message allocations = ------ 32.0 mb
sweep allocations = ------ 29.9 mb
filling fractions = 0.017 0.597 0.855 0.000
Current efficiency = 66% 31% 97%
Cell updates/second = 437 1215 36%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.3331E+03
- 2048 cores
Info allocations = 106.9 gb 85.5 mb
message allocations = ------ 64.0 mb
sweep allocations = ------ 30.7 mb
filling fractions = 0.017 0.591 0.848 0.000
Current efficiency = 58% 39% 97%
Cell updates/second = 298 1011 29%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.2305E+03
- 4096 cores
Info allocations = 147.4 gb 61.2 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 20.1 mb
filling fractions = 0.016 0.619 0.846 0.000
Current efficiency = 47% 50% 97%
Cell updates/second = 187 785 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1753E+03
- Standard output of last advance (No self-gravity)
- 1024
Info allocations = 67.0 gb 93.0 mb
message allocations = ------ 32.0 mb
sweep allocations = ------ 25.2 mb
filling fractions = 0.017 0.597 0.852 0.000
Current efficiency = 69%
Cell updates/second = 466 1299 36%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.4099E+03
- 2048
Info allocations = 92.0 gb 71.2 mb
message allocations = ------ 64.0 mb
sweep allocations = ------ 22.5 mb
filling fractions = 0.016 0.620 0.848 0.000
Current efficiency = 61%
Cell updates/second = 322 1097 29%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.2873E+03
- 4096
Info allocations = 125.3 gb 51.0 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 19.9 mb
filling fractions = 0.016 0.616 0.849 0.000
Current efficiency = 50%
Cell updates/second = 206 865 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1884E+03

Info allocations = 125.6 gb 54.7 mb
message allocations = ------ 128.0 mb
sweep allocations = ------ 19.9 mb
filling fractions = 0.016 0.620 0.845 0.000
Current efficiency = 51%
Cell updates/second = 211 882 24%
Wall Time Remaining = ------
AMR Speed-Up Factor = 0.1919E+03
- CPU hours
- 1 frame: 4600 SUs
- 50 frames: 230,000 SUs
- 4~5 runs: 1,150,000 SUs (on stampede)
- Current Allocation: 416,000 SUs (on stampede), 1,138,234 SUs (on Kraken)
Outside-In collapse not unique to "Bonnor Ebert Spheres"
Uniform spheres also collapse outside in —
From Larson's 1969 Paper on similarity solution for uniform sphere:
In this figure, we can see the same density profile we see from my perturbed BE sphere developing, but from a uniformly dense IC in this case… The collapse is highly non-homologous overall, inside regions are collapsing but outer regions aren't… Put another way, only the interior core region is collapsing homologously… What types of collapse are homologous I wonder..?
Curious, I checked rho and vrad in our code, and it seems that outside-in collapse does indeed occur for overdense uniform spheres as well:
From our code —
RHO:
VRAD:
Huh, this is interesting… I am sure this has to do with the fact that Whitworth's solution encompasses all collapse — from LP (uniform) to Shu (singular) to Hunter (critical BE spheres)…
Thoughts on Scales
Currently in the code we have 4 different types of 'scale' variables:
- Computational scales which obey expected relationships (VelScale=lScale / TimeScale)
- lScale - length scale
- mScale - mass scale
- rScale - density (rho) scale
- VelScale - velocity scale
- TimeScale - time scale
- BScale - magnetic field scale (Gauss)
- pScale - energy density (pressure) scale
- Computationally scaled constants
- ScaleGrav - G in computational units
- ScaleC - speed of light in computational units
- Microphysical scalings
- TempScale = pScale/rScale * XMU/kB
- nScale = rScale / XMU
- Inverse computational scales used in source routines
- eDotScale = 1d0 / (pScale/TimeScale)
- And then inverse scales that involve microphysical scalings
- ScaleCool = 1d0 / (pScale/TimeScale/nScale^2)
- ChemScale = 1d0 / (nScale/TimeScale)
A computational length, density, pressure, velocity, etc. derive meaning through the Euler equations - however, computational temperature and computational number density are meaningless; only the corresponding physical quantities have meaning. This is why the microphysical scalings are not consistent with the other scales (nScale /= lScale^-3) but instead are designed to give quick ways of getting physical values of temperature and number density from computational mass density and pressure (provided XMU is the mean molecular weight mmw):
- np = rhoc*nScale → nScale = np/rhoc = (np/rhop)*rScale = rScale/XMU
- Tp = Pc/rhoc * TempScale → TempScale = rhoc*Tp/Pc = rhoc*pScale/(np*kB) = pScale*XMU/(rScale*kB) = pScale/nScale/kB
The problem is further complicated when XMU can vary (as occurs with a multi-species gas). Then the deviation of the local value (xmu) from the fiducial value (XMU) must be included to calculate the actual number density or temperature:
- np = rhoc*nScale*XMU/xmu
- Tp = Pc/rhoc * TempScale * xmu/XMU = Pc/rhoc * mTempScale
And now nScale can have meaning if we store the computational number density in tracer fields - though it is still not consistent with lScale^-3:
- np = nScale * nc
And if we are tracking computational number density, there is a consistency relation between computational mass density and computational number density:
- rhoc = np / nScale / XMU * xmu = nc * xmu/XMU
Which allows us to write an expression for a modified TempScale
- mTempScale=TempScale*xmu/XMU = TempScale*rhoc/nc
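As a concrete illustration of the conversions above, here is a minimal standalone sketch (not AstroBEAR source; the fiducial values of rScale, pScale, and XMU are made-up numbers, and XMU/xmu are treated as mean particle masses in grams):

program scale_demo
   implicit none
   real(8), parameter :: kB     = 1.380658d-16  ! erg/K
   real(8), parameter :: rScale = 1.0d-20       ! g/cc   (assumed fiducial value)
   real(8), parameter :: pScale = 1.0d-10       ! erg/cc (assumed fiducial value)
   real(8), parameter :: XMU    = 2.13d-24      ! fiducial mean particle mass [g]
   real(8) :: nScale, TempScale, rhoc, Pc, xmu, np, Tp
   nScale    = rScale/XMU                       ! microphysical scalings
   TempScale = pScale/(nScale*kB)               ! = pScale*XMU/(rScale*kB)
   rhoc = 2.0d0                                 ! computational mass density
   Pc   = 1.5d0                                 ! computational pressure
   xmu  = 1.0d-24                               ! local mean particle mass [g]
   np = rhoc*nScale*XMU/xmu                     ! physical number density
   Tp = Pc/rhoc*TempScale*xmu/XMU               ! physical temperature (= Pc/rhoc * mTempScale)
   print *, 'n [1/cc] =', np, '   T [K] =', Tp
end program scale_demo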
In any event I vote we
- leave the true computational scales, as well as the microphysical scales (nScale and TempScale) as is
- rename the inverse scales to have Scale in front (ScaleCool, ScaleEDot, ScaleChem)
- and then, to avoid confusion, change ScaleGrav to ScaledGrav and ScaleC to ScaledC
Meeting Update
Finished the documentation on FluxLimitedDiffusion in the Developer's Guide
New movie of Magnetized Colliding Flows
Meeting update
Last week I spent some time reading up on polytropes - both the theory and an approximate analytic solver for Erini's code. I also worked on debugging my sink particle, although I haven't posted updates on that. Mostly I am going to put that aside until the paper is finished, as it is extra material for that work. I will take the next week to read materials for my paper and begin coding the Godunov method for the Euler equations.
Meeting Update 03/25/2013 -- Baowei
- Tickets
- New: none
- Closed: #273 (Install AstroBEAR on Stampede at TACC)
- Machines:
- Xsede current usage: stampede 25% (315,584 SUs left), kraken 0% (1,138,234 SUs left)
- Disk Usage
- Raid alert from Clover: Rich's working on it
- Simple speed test on blue streak, blue hive and stampede (#7 of Top500) and hypre 2.8/2.9
Module | Blue Streak(16 cores) | Blue hive(8 cores) | Stampede(16 cores) |
---|---|---|---|
BonnorEbertSphere | 794 secs | 334 secs | 545 secs |
Bondi | 899 secs | 107 secs | 626 secs |
MolecularCloudFormation | 392 secs | 48 secs | 269 secs |
Didn't see a big difference between hypre 2.8 and 2.9 on Blue Streak.
- New wiki header picture
- Working on scaling test on stampede (with MolecularCloudFormation)
- Will take a day off on Tuesday for moving.
Erini's Update 3/25/2013
- met with Erica, Professor Blackman, and Jonathan
- numerically solving the Lane-Emden equation, as opposed to the approximate analytic solution from the paper I initially started with
- Jonathan walked me through using various read-in profiles, which I will ultimately need to do with the Lane-Emden module
- option of using stellar evolution code data from public tables
Column density
Mach 1.5:
Mach 1.5 Movie
Mach 3.6:
Mach 3.6 Movie
Mach 3.6 with beta = 4:
Mach 3.6 with beta = 4 Movie
Magnetized TSF
Shock triggered star formation comparison: Mach 1.5 vs Mach 3.61
Here's a comparison between Mach 1.5 shock-triggered star formation (left column) and Mach 3.61 triggered star formation (right column). The time slices are taken at 1, 1.2, and 2.36 cloud crushing times respectively (1 cloud crushing time = the time for the transmitted shock to travel halfway through the cloud).
The Mach 3.61 case seems to be compressed faster due to the accelerating effect of gravity on the compression (possibility: compression by a faster shock → stronger gravitational drag on the uncompressed material by the accretion core → accelerated compression). Both cases seem to demonstrate some sort of angular momentum behavior surrounding the core, which is perhaps something to explore later. The movies:
Mach1.5
Mach3.6
Fixed temperature
Found a place in the code that was not setting the correct ambient temperature, which in turn changed the wind speed. After rerunning the Mach 3.6 shock (corresponding to 6.7 km/s) last night on Stampede, here's what I found:
The velocity shown has the particle velocity subtracted. The Mach 1.5 shock can now produce star formation too, as expected. Movies tomorrow.
Volume rendered cloud. Formed sink particle presented by the sphere at the center.
The compressed cloud exhibits a pancake shape. The star forms at the center of the "pancake". The edge of the "pancake" is constantly being shredded by the wind. Material from outside the "pancake" is raining down onto it, but the central region of the "pancake" has acquired angular momentum, with the rotation axis aligned with the shock propagation direction. The rotational velocity V_phi is actually much greater than the infall velocity V_r.
Current Disk Usage
Machine | Total | Used | Use% |
---|---|---|---|
clover | 11T | 9.7T | 95% |
grass | 5.3T | 3.7T | 75% |
alfalfa | 5.3T | 4.9T | 97% |
bamboo | 13T | 12T | 97% |
Debugging sink hack
I am trying to debug my sink hack. I switched on the flags:
FFLAGS=-g -traceback -check bounds -check uninit -check pointers -ftz
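(For reference, assuming the Intel compiler given the -xP flag used below: -g and -traceback produce symbolic stack traces on a crash, the -check options enable run-time checking of array bounds, uninitialized variables, and pointer validity, and -ftz flushes denormals to zero.)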
When I run astrobear made with these flags, the code dies immediately saying the following:
When these flags are not turned on, and instead I use the original makefile line:
FFLAGS=-O3 -xP
The code runs but then dies a few frames after the sink starts accreting..
Meeting Update 0318
Finally I was able to run things on stampede. Here's a movie of triggered star formation with Mach 3.6 (10 times postshock density) on the Boss cloud. 16 zones per cloud radius, 3 levels of AMR. Takes about 6000 cpu hours to finish.
http://www.pas.rochester.edu/~shuleli/0318/tsf.gif
The grid looks like:
Tried 5 levels (effective 512 zones, same as Boss), takes too long to finish under current refinement criterion (6 hours per frame).
I'm currently working on testing the new implicit solver with modifications Jonathan and I did 2 weeks ago. The tested newer version should roll out sometime this week.
Meeting Update 03/18/2013 -- Baowei
- New users from Download form
- Helsinki, UR(student for CS course)
- A better way of handling users who come through the Download form?
- Equipment & Local machines
- Mouse, Mac to VGA port
- Clover is becoming unstable: move the wiki to botwin and use other machines for backup
- Grass has a lot of issues as well. New machine for Erica (bamboo?)
- Tickets
- New: #280 (Strange message submitting to Bhive afrank queue)
- New movies on youtube channel: http://www.youtube.com/user/URAstroBEAR
- Worked on #273, testing hypre 2.9 and running script for testing suite on blue streak.
Update
- I added a sink to the grid and detailed the process in earlier blog posts from last week… This was pretty straightforward once I figured out the process. The sink was successfully positioned at the origin at the start of the simulation, and became Jeans unstable at the same time a sink particle in the previous simulation was born. It began accreting then, but 3 frames past accretion the simulation encountered NaNs and the code choked… I will look into this further today and tomorrow.
- I have been looking more closely into additional resources for the paper. I read Hennebelle again last week in more detail and wrote up a critique/description of it here: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary. I thought this would be a clear, concise way to discuss relevant papers with co-authors… I am close to writing the discussion.
- Here is our movie
A few questions: Riccardo Betti had to cancel our interview today for the AstroBEAR video. He is the last interview that I need, and he offered to reschedule for sometime next week. However, before I try to schedule a new time, I wanted to send you a cut of the video as it currently stands. Please take a look at the preview and keep in mind that I still need to add supportive shots of Martin working with the code, which will hide some of the jump cuts:
- Overall, is this what you were hoping for? Do you like the style, and have I covered all of the important areas?
- I'm a little concerned that the current length is at 4:43. With the addition of Riccardo's footage, this will bump it over the 5 minute mark. Some audiences have trouble watching videos over 5 minutes. Thoughts?
- Do you still want Riccardo's voice in the video (to represent a user outside of the group), or is it good as is? This might push your public release date back slightly.
- Do you have any interesting videos representing a collapsing cloud that I can include during Adam's introduction?
- Can we make the bear more professional?
- I will be giving a talk at MCC in a couple of weeks on star formation and our research group.
- The coding project is likely to pick up again this week. I had to spend some time last week revisiting some earlier chapters, as I seemed to have forgotten some characteristic properties of the PDEs that are useful for approximation methods…
Meeting Update 03/18/2013 - Eddie
- made movie of emission maps for all cases of beta for the 2.5D pulsed jets (de Colle & Raga set up)
- merged Jonathan's new cooling routines with my revision. Sorted out a bunch of merge conflicts and got it to compile.
- updated my radiative shock module to track all hydrogen and helium species. Not getting a steady shock, but I think I've isolated the issue and I should have that fixed by tomorrow.
- ran a 2.5D jet with lab parameters from Francisco: jet radius = 1 mm, jet velocity = 100 km/s, runtime = 500 ns. The jet is an aluminum plasma, and the ambient is an argon gas. The cooling functions for these elements come from an old paper by Post et al. 1977. This simulation simply uses a cooling rate that adds the Al and Ar functions. I think I can improve this by adding tracers and using a density-weighted function.
Erini's Update -3/18/13
Began coding the approximate analytic solution for the general case of the Lane-Emden equation. I am using the solution derived in F. K. Liu's "Polytropic Gas Spheres: An Analytic Solution to the Lane-Emden Equation" (this is the same paper Erica used for her isothermal case, n = \(\infty\)).
Meeting update 3/17/2013
- Worked with Eddie on the multi-species EOS and the effective gamma. Need to distinguish between tracked and untracked species.
- Looked over Shule's implicit diffusion solver and read up on Mixed-Frame Flux-Limited Diffusion. I should have it working by week's end. Also began documenting the solver in the developer's guide.
- Also came across some old documents dealing with Radiation Transport I had prepared 'back in the day'.. See the attachments
Martin's update, 13 mar '13
AGN jet truncation. After meeting with Eric and Alex Hubbard 2 weeks ago, we agreed on using the original magnetic tower setup to run scaled AGN sims. I've been working on resurrecting those files with the newest code revision.
Poloidal collimation. Phone call with Andrea. We discussed the dark feature at y~.4 in
http://www.pas.rochester.edu/~martinhe/andrea/synthetic.png
Is it real or numerical? If real, is it transient? I will do another run to answer these questions.
ADAM: PLEASE ANSWER JULIN'S EMAIL ABOUT A TELECON.
Met with William to do some shots for the AstroBEAR video.
Adding a sink particle to the grid
The easiest way to get what I am looking for (a sink at the origin that accretes material in a spherically symmetric manner after the center-most cell satisfies the Federrath conditions) is to hack astrobear, rather than apply fixes and checks to the code for particles near reflecting boundary conditions. The latter is something I would be interested in pursuing, but due to time I wanted to just get the ball rolling on getting a sink at the origin to check flow properties post-formation.
Given the sparse documentation on the wiki for doing this, I thought I'd better outline the procedure I learned here, so it can be ported over to an actual wiki page later.
The hack I did has the following attributes:
- Initialize a fixed sink and associated point gravity object
- Kill any new checks for sinks so that as simulation progresses, no others will form (for the BE collapse case no other sinks SHOULD form anyway)
- Modify accretion so that the new mass values are increased by a factor of 8, to account for accretion from the reflecting ghost zones along x, y, z
- Kill all motion from sink by removing particle momentum
As I outlined in previous blog posts on the wiki, amr_control.f90 calls various routines for checking for new sinks, initializing and advancing them, etc.
In particular, the first pertinent routine it calls is particle_preupdate. In this routine, there are loops that 1) check for new particles and 2) collect new particles. I commented both of these out and took some code from 'collect new particles' and placed it in my problem.f90.
Inside 'collect new particles', there are the following lines:
NULLIFY(NewParticle)
CALL CreateParticle(NewParticle)
NewParticle%xloc(1:nDim)=GlobalParticles(1:nDim,i)
NewParticle%moments=GlobalParticles(nDim+1:,i)
NewParticle%iAccrete=FEDERRATH_ACCRETION
CALL AddSinkParticle(NewParticle)
CALL CreatePointGravityObject(NewParticle%PointGravityObj)
NewParticle%PointGravityObj%x0(1:nDim)=GlobalParticles(1:nDim,i)
NewParticle%PointGravityObj%soft_length=1d0*sink_dx
NewParticle%PointGravityObj%soft_function=SPLINESOFT
These lines initialize a new particle and associate a point gravity object with it. Now, there is a distinction between sink particles and point gravity objects. To my understanding, sinks can accrete mass and move, whereas point gravity objects can only exert a gravitational force. Thus when one wishes to accrete mass onto a gravitational point, one should initialize a sink + point gravity object. (Whereas if you only cared about exerting a force, with no need for accretion onto the particle — accumulation of matter in the surrounding cells would be fine — you could use just a point gravity object.)
So, I added these lines to my problem.f90 in the routine problem module init, below the ambient and clump objects. To use these routines, I also added a USE Particle Declarations at the top of the file.
IF (.NOT.LRESTART) THEN
NULLIFY(Particle)
CALL CreateParticle(Particle)
Particle%lfixed=.true. !keeps particle fixed at xloc
Particle%xloc(1:ndim)=0d0
Particle%iAccrete=1 !Fedderath accretion, as given in particle def
Particle%moments(1:3)=0
!check print statement for moments
CALL AddSinkParticle(Particle) !accesses particle_
CALL CreatePointGravityObject(Particle%PointGravityObj)
!accesses pointgravitysrc
Particle%PointGravityObj%x0(1:ndim)=0
Particle%PointGravityObj%soft_length=1d0*sink_dx
!not sure about this
Particle%PointGravityObj%soft_function=SPLINESOFT
!default mass = 0 as defined in the type
! starts gaining mass when surrounding gas is jeans unstable for fedd accretion (should coincide with same time a sink formed in previous simulations..)
END IF
The grid is then initialized with this point particle at the origin. It lives there, massless and with 0 momentum throughout the simulation until it begins to accrete. I found that I did not have to explicitly 0 out the momentum though, as checks were already in place that did this for a fixed particle.
To modify the accretion, in the finalize accretion routine I multiplied dQ by 8:
Particle%Q(1:NrHydroVars)=Particle%Q(1:NrHydroVars)+Particle%dQ(1:NrHydroVars)*8d0
Now the sink will be accreting 8 octants' worth of material.
Now that the edits have been made, the overall structure of the code goes like this —
1) sink + point gravity obj is initialized on the grid
2) The code goes through the pre-update, which no longer checks for new particles, but for the one particle in the particle list (added to it in the problem module) it does accretion, synchs accretion, and finalizes accretion. In the do accretion routine, the code checks whether the Jeans criterion has been violated around the sink particle. If it has, then accretion takes place. Federrath accretion removes just enough gas from the cells to make them no longer Jeans unstable, and adds that gas to the particle. This is the type of accretion I am using. Away from the sink, one expects a Bondi flow to be reached.
3) The code finalizes accretion, where it zeroes out the particle's momentum and adds the new values for accreted mass and energy (multiplied by the factor of 8) to the q array.
And this all repeats for the next time step.
Meeting Update Mar 11
- To prepare for implementing FLD, I walked Jonathan through the implicit diffusion solver / subcycling code in my latest development version of AstroBEAR. We modified some of the communication routines and, following our discussion, changed how the boundary temperature and temperature derivatives are ghosted to a more reasonable approach. This may have something to do with the AMR noise we saw in previous testing (see earlier blog post). After the modifications, I found NaNs while testing the code, so I have been working on ensuring the correctness of the modified code. Hopefully the modified code will behave better on the ablative RT problems.
- TSF: working on getting Stampede to work. I was able to compile the code successfully, but unable to run it on either the development or the normal queue. The error message has been attached to ticket #273.
Meeting Update 03/11/2013 -- Baowei
- Tickets:
- New: None
- Closed: #274 (Question on AMR)
- New user from China requesting the code through the Download form.
- Projects for CS course:
- Asking for videos for youtube channel & Promo movie:
- New movies uploaded to youtube: http://www.youtube.com/user/URAstroBEAR
- Out of Memory Issue on Blue Streak
- Latest revision installed on Blue Streak: hypre-2.9.0b-MPI-XL-No-Global-Partition with optimization flag O3
- Teragrid allocation
- Stampede: 416,000 SUs:https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu03082013
- Kraken: 1,138,234 SUs
About Stampede Time and Slurm
Some useful information about stampede queue:
Queue Name | Max Runtime | Max Nodes/Procs |
---|---|---|
normal | 24 hrs | 256 / 4000 |
development | 4 hrs | 16 /256 |
large | 24 hrs | 1024 /16000 |
Details can be found at http://www.tacc.utexas.edu/user-services/user-guides/stampede-user-guide#running-slurm-queue
Our total allocation is 416,000 SUs (CPU hours). If all our jobs run on 1000 CPUs, that is 416,000 / 1000 = 416 hours, so in total we have a little more than TWO weeks.
To submit a job to 1000 cpus in normal queue on Stampede for 24 hours, here's an example
#SBATCH -A TG-AST120060
#SBATCH -J bearRun
#SBATCH -n 1000
#SBATCH -c 1
#SBATCH -p normal
#SBATCH -t 24:00:00
The system can decide how many nodes to use for you, or you can specify the number with the -N option.
The complete slurm script can be downloaded from ticket #273.
Sink lessons 2
Whoops.. pushed back arrow and just lost all of my post… grrrrrrr.
As I was saying before being so rudely interrupted…
I learned that currently a sink accretes from ghost zones by a) reducing mass in the ghost region, WITHOUT b) increasing mass of the sink. This is because of the possibility for "over counting" the accreted mass onto the sink if the ghost zone is an internal boundary, hence overlapping with actual physical cells. Q) Do internal ghost zones then get their mass information from overlaid physical cells?
If your sink were near a physical boundary, however, you'd like accretion to be a little different - such that it both a) reduced mass in the ghost region and b) accumulated mass onto the sink, since the over-counting issue no longer applies.
AstroBEAR 2.0 as of now does NOT make this distinction - this should perhaps be added.
Now I also learned a bit more about the force algorithm — will be updated here soon..
All about gamma
In a multi-species gas, one can have different gammas associated with different species. Combining them to give an effective gamma is not entirely straightforward.
One might expect that a particle-number-weighted gamma would be appropriate,
but it turns out that the quantity that should be weighted is the number of degrees of freedom.
There is a relation between the degrees of freedom and the adiabatic index,
so an effective gamma can be derived by taking a number-weighted average of the degrees of freedom.
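Presumably the (missing) expressions were along these lines - the standard relations for a mixture of ideal gases, with n_i the number density of species i:

\[ \gamma_i = 1 + \frac{2}{f_i}, \qquad f_{\mathrm{eff}} = \frac{\sum_i n_i f_i}{\sum_i n_i} \quad\Rightarrow\quad \gamma_{\mathrm{eff}} = 1 + \frac{2}{f_{\mathrm{eff}}} = 1 + \frac{\sum_i n_i}{\sum_i n_i/(\gamma_i - 1)}. \]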
Learning about sinks in astrobear
Working on understanding sink implementation in the code.
Here is a running dialogue on what I am learning and questions that I have.
The Problem -
1) Currently, the BC's for sinks seem to be set to extrapolating, which is why the sink can move off of the grid… and into the ghost region. Q) Can sinks in the ghost region interact with gas/sinks in the grid?
2) If we made them reflecting, once the sink hit the boundary it would stay there (since all momenta would be equal and opposite at that point). Q) What are the potential negative effects of treating the problem this way when there are multiple sinks?
3) How to modify accretion when the sink interacts with reflecting boundary?
Code -
1) amr_control.f90
In Line 266 of AMR_control.f90, in the subroutine LevelsInit, SinkParticleInit is called which lives in Particle_control.f90:
a) SinkParticleInit is used for initializing a sink from a problem module — there is a warning about periodic BC's and potential accretion troubles here that I'd like to understand better.
Then on line 473, in Recursive Subroutine AMR(), ParticlePreUpdate is called, which
a) clears particle momenta for level n (dmom is a (maxdepth (set = 16 in globaldec) x 3) array defined in pointgravitydef: px, py, pz), b) if you're on the finest grid (n == maxlevel), then moments are calculated and synched, c) checks for new sinks and adds any that are found to the particle list, d) updates accretion for the sinks.
Line 474 calls ApplyPhysicalBCs, then next is ellipticBCs. Then we advance grids (line 530).
Then we call ParticlePostUpdate, which
a) if lsinks, synch's gas forces for particles on each level, and advances particle if on finest level. if lparticles, and on finest level, calls UpdateParticleObjects. Q) What are these handles, lparticle and lsink?
ADVANCE PARTICLE SUBROUTINE -
a) begins by allocating arrays for velocity, position, mass, gas_accel, sink_accel, lfixed (not sure what this is) for each particle, b) next populates the arrays from particle%Q(imom(1:ndim)), particle%xloc(1:ndim), etc. Q) How does the imom array get filled? Considering adjacent cells? Boundary cells also?, c) next it updates the velocity array using sink_accel via what appears to be a mean acceleration*dt - vel = vi + (dv/dt)*(1/2)*dt. How it computes sink_accel is the matter of GetGasAccel and CalcSinkAcc - so I want to check these routines for potential boundary effects.
UPDATE PARTICLE OBJECTS -
In this subroutine (here on SR) in particle_control.f90, an associated particle's velocity, position, time, and mass are updated. What I think we are interested in modifying here is the particle's velocity (it seems to be momentum, so why is it called v?), which, as can be seen in this SR, goes like:
Particle%PointGravityObj%v0(1:nDim)=Particle%q(imom(1:nDim)) — here, does the array imom contain momentum from ghost cells?
Questions -
1) How to get a center, stable, accreting sink in an octant simulation with AMR?
2) Does each sink have a surrounding mesh separate from the numerical domain?
3) Does a sink form first at a cell center, but is then unrestricted in r? Yes.
4) What are the order of steps? Hydro, source updates (here included are the sinks in particle list), elliptic? Why sink between hydro and elliptic? Where do pre and post update fit into this order?
5) What is n's range? n is the patch level. It runs from -2 to the max level of AMR. -2 is the "container" for the info node, -1 is the domain (one can have many domains in astrobear1.0, but this is not implemented in astrobear2.0), 0 is coarse, and 1 and up correspond to the different AMR levels. Info nodes -2 and -1 do not contain data structures.
Updated BE paper -- Finished Results/Figures
Here is a link to the most updated version of the BE paper with finished results section and figures/captions.
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
Meeting Update
Adiabatic BE sphere —
I revisited the Lane Emden derivation for isothermal sphere and then considered how it would change under adiabatic conditions, outlined in this blog post: https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica02262013
Basically, making the sphere adiabatic does not change its initial configuration, and so no change to the problem module needs to be made.
Running the BE problem module out past sink formation —
I made a page on the results I saw with sink formation in the Matched and BP cases: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/BEMar1 This set up is going to need some work if we want to track post sink formation.
Paper —
I have been working on my paper, added some new text on the results section and began outlining discussion section
Coding —
I would like to revisit my Godunov solver this week and check the BC's. I began reading the next chapter on Euler equations with Godunov method. I believe my next step will be to use the Godunov method to solve Euler equations in 1-D. Next, I will adjust the code to be higher dimensional, then will begin learning about a Poisson solver and Hypre.
Movie —
We would like to begin shooting movie material this week. Here is an example of what the end product may look like — http://www.youtube.com/watch?v=GID2K3EiiI8
If we have the green light on script questions, Will would like to interview/videotape Jason TODAY (Have you guys been in touch?)
He would like to repeat some of the questions for everyone (such as, what does astrobear do?)
Adam, when would you be free to shoot your part?
When this week are Martin and I free to shoot supporting material?
Do we have an official PR release date set yet?
Science Outreach —
Meeting at East High this Friday again for an hour of Physics tutoring.
Meeting Update 03/04/2013 - Eddie
Progress Last Week
- finished first round of 2.5D pulsed jet runs with the Raga set up ehansen02272013
- set up 2.5D projections to create emission maps
- started the strong cooling jet project again (Francisco & Imperial College)
- implemented new cooling table and set up the problem with the experimental parameters
Stuff for This Week
- finish coding NEQ cooling
- adding / adjusting a few more tables for some of the He processes
- run some 1D shock models with the finished cooling routines to compare with Pat
- possibly rerun the Raga set-up for 2.5D pulsed jets, or move on to new simulations (maybe random pulses or Kris' set up)
- get the strongly cooling jet with lab parameters running and analyzed
Meeting Update: Monday March 4, 2013
Working on server - Rails provides an excellent framework, but comes at the cost of a steeper learning curve.
Progress: Web server that displays video.
Next: Include web forms to modify the Ruby module classes.
Meeting Update 03/04/2013 -- Baowei
- New Revision:
- 1241:70f57b1e4434
- Mainly bug fixes found on blue streak
- Tickets
- Stampede is all set (#273)
- Trac/wiki updates
- History of AstroBEAR (Upload to youtube?)
Meeting Update Mar.4
TSF: For the GLMYG setup, we can derive from the information in the paper that mu = 1.0357. The following profile can be derived as well:
 | density (g/cc) | parts/cc | temperature (K) |
---|---|---|---|
center | 9.8e-20 | 5.7e+4 | 20 |
edge | 6.831e-21 | 3.973e+3 | 20 |
ambient | 1.725e-22 | 1.00328e+2 | 792 |
For isothermal, crossing time = 715 kyr, shock crossing time = 7.15 kyr. We study the cloud stability in the following three settings:
isothermal:
http://www.pas.rochester.edu/~shuleli/0304/gn_iso.gif
addiabatic (initialized by isothermal profile):
http://www.pas.rochester.edu/~shuleli/0304/gn_addi.gif
with DM cooling (initialized by isothermal profile):
http://www.pas.rochester.edu/~shuleli/0304/gn_cool.gif
Boss setup, Mach = 3:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach3.gif
Boss setup, Mach = 5:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach5.gif
Boss setup, Mach = 10:
http://www.pas.rochester.edu/~shuleli/0304/bo_mach10.gif
Next steps:
- run the GLMYG setup with wind at low resolution.
- move to Stampede to finish the high-resolution runs. I got access to Stampede last week, but there seems to be a permission problem when compiling the code.
FLD: I'm currently working on the implicit code that will be handed to Jonathan to do flux limited diffusion.
LLE: You can now view the LLE presentation about resistive clumps online:
http://www.pas.rochester.edu/~shuleli/0304/lle3113.pptx
(This is written in PowerPoint 2013. For users of PowerPoint X (X newer than 2007), the pptx version is recommended. For OpenOffice and Keynote users, a converted ppt version will be uploaded soon.)