Meeting update
Updates are posted on the accretion project page, here:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/SteadyStateAccretionProblemPage
Localized Bondi dev update
Developed a solver that determines which branch of the local Bondi flow the kernel edge corresponds to, integrates the equations inward, and pastes the solution onto the grid.
See the different possible solution classes in the attached PDF.
To test the solver, I have a problem module that specifies an outer boundary condition for the flow (constant values at r ≥ 1), which is internal to the grid.
For this boundary, the code figures out which branch to integrate the equations on, and pastes the steady-state solution within r < 1.
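For concreteness, here is a minimal sketch of the kind of inward integration involved, written for an isothermal (gam=1) Bondi flow in dimensionless variables; the dimensionless form, tolerances, and the simple branch comment are my illustration, not the module's actual implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def dudx(x, u):
    # Isothermal Bondi flow: (u - 1/u) du/dx = 2/x - 1/x^2,
    # with u = |v|/c_s and x = r c_s^2 / (GM); the sonic point sits at x = 1/2.
    return [(2.0 / x - 1.0 / x**2) * u[0] / (u[0]**2 - 1.0)]

def integrate_inward(u_edge, x_edge=10.0, x_inner=0.05):
    """Integrate the flow inward from the kernel edge to an inner radius.

    A boundary Mach number below the transonic value at x_edge selects the
    subsonic 'breeze' branch; above it, the supersonic branch. (A full
    solver would also treat the critical, transonic solution explicitly.)
    """
    sol = solve_ivp(dudx, (x_edge, x_inner), [u_edge], rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0]

x, u = integrate_inward(u_edge=0.005)  # a subsonic boundary value
print(u[-1])                           # Mach number near the inner radius
```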
I let the sim run for 20 frames using this setup (holding the boundary values of the problem fixed for r > 1), using the Krumholz accretion module (see the attached rho.gif for a movie, and the PDF for the initial conditions).
I need to finalize some things with the new sink accretion algorithm to make use of this framework. Things are mostly built, so it shouldn't take much longer. I also need to come up with test problems.
meeting update
Increased the resolution of the supernova core collapse problem… am getting funky boundary issues and poor HSE in the central core.
possible fixes:
-integrate outward from the center instead of inward from the ambient in the HSE routine (see the sketch below)
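For reference, here is a minimal sketch of what an outward HSE integration could look like for a self-gravitating 1D profile (the array layout, trapezoidal quadrature, and function name are illustrative, not the actual AstroBEAR routine):

```python
import numpy as np

G = 6.674e-8  # cgs

def hse_outward(r, rho, p_center):
    """Integrate dP/dr = -G M(<r) rho / r^2 outward from the center.

    r, rho : 1D radial profile arrays (r[0] at/near the origin)
    p_center : central pressure boundary value
    Returns the pressure profile consistent with HSE for this rho(r).
    """
    p = np.empty_like(rho)
    p[0] = p_center
    m_enc = 0.0
    for i in range(1, len(r)):
        dr = r[i] - r[i - 1]
        # enclosed mass from a trapezoidal shell integral (4*pi*0.5 = 2*pi)
        m_enc += 2.0 * np.pi * (rho[i - 1] * r[i - 1]**2 + rho[i] * r[i]**2) * dr
        g = G * m_enc / r[i]**2
        p[i] = p[i - 1] - 0.5 * (rho[i - 1] + rho[i]) * g * dr
    return p
```

Starting from a fixed central pressure avoids anchoring the solution to the ambient values, which is one way the errors at the origin might be reduced.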
meeting notes
Am trying to interpolate the following 1D radial profiles onto AstroBEAR's grid in 3D octant symmetry:
These profiles represent a ~5 solar mass progenitor for a core collapse supernova. The red vertical line is the radial cut-off I am taking for initialization onto the grid. Beyond this radius I am switching the profiles to constant ambient values.
These profiles are assumed to be in ~HSE. Thus, I used the HSE module in AstroBEAR for self-gravitating gas to produce the following grid-interpolated profiles:
Neutrino cooling is being mocked up in the code using the empirical formula:
For these initial conditions, the cooling rate at the peak density, its value in computational units, and the initial internal energy of the central zone together set a cooling time for the central zone. If the cooling rate were constant over that cooling time, we would expect the internal energy to drop by the corresponding amount within the timestep. Here is a plot of the internal energy over dt. The HSE module produces this:
Pretty close, except at origin…
After freefall time (or 38 cooling times):
The mesh shows strange behavior on the reflecting sides, and no sink particles are being used.
Bondi update - April 30
Two Bondi runs: gam=1.0001, gam=1.4
In each case, the Bondi radius (GM/cs_inf^2) is resolved by the same number of zones (~160).
Gam=1 doesn't blow up, but gam=1.4 does. It seems this can be attributed to an adiabatic pressure increase in the kernel (due to mass pile-up) that sends a shock wave upstream into the Bondi flow. Does the pressure increase in the kernel support this?
Movies are here:
gam=1.4
Density: gam1pt4_rho.gif
Pressure (ram vs. thermal): gam1pt4_press.gif
Speeds (sound vs. vmag): gam1pt4_speeds.gif
gam=1
Density: gam1_rho.gif
Pressure (ram vs. thermal): gam1_press.gif
Speeds (sound vs. vmag): gam1_speeds.gif
A comparison of the density at frames 0 and 1 for the different gamma cases is here:
And pressure is here:
These are the same physical time. I think the dynamical time is set by the gas properties at infinity, in which case these should also be the same dynamical time…
Lastly, mdot converges to its theoretical value in the gam=1 case, but not in the gam=1.4 case:
The Bondi and accretion-algorithm mdots are calculated from different expressions. The classical Bondi mdot is:

$$\dot{M}_B = 4\pi\lambda\,\rho_\infty \frac{(GM)^2}{c_{s,\infty}^3}$$

whereas the Krumholz prescription (Krumholz et al. 2004, their eqn. 11) uses:

$$\dot{M} = 4\pi\rho_\infty r_{BH}^2\sqrt{\lambda^2 c_\infty^2 + v_\infty^2}, \qquad r_{BH} = \frac{GM}{c_\infty^2 + v_\infty^2}$$
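As a sanity check that the two expressions coincide for a static particle, here is a quick numerical comparison (a sketch; the isothermal value lambda = e^{3/2}/4 and the cgs numbers are purely illustrative):

```python
import numpy as np

G = 6.674e-8  # cgs

def mdot_bondi(M, rho_inf, cs_inf, lam):
    # classical Bondi rate
    return 4.0 * np.pi * lam * rho_inf * (G * M)**2 / cs_inf**3

def mdot_krumholz(M, rho_inf, cs_inf, v_inf, lam):
    # Bondi-Hoyle style rate in the Krumholz et al. (2004) form
    r_bh = G * M / (cs_inf**2 + v_inf**2)
    return 4.0 * np.pi * rho_inf * r_bh**2 * np.sqrt(
        (lam * cs_inf)**2 + v_inf**2)

lam_iso = np.exp(1.5) / 4.0          # isothermal lambda
M, rho, cs = 2e33, 1e-20, 2e4        # illustrative cgs numbers
print(mdot_bondi(M, rho, cs, lam_iso))
print(mdot_krumholz(M, rho, cs, 0.0, lam_iso))  # equal when v_inf = 0
```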
Page on SN shocked clump
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/scratch10
Work in progress
Review: Conserving angular momentum over accretion
Enforcing mass and linear momentum conservation means that angular momentum will generally not be conserved over the accretion step. We can see this by first noting that (by mass conservation) the sum of the mass to be accreted from the kernel (LHS) is equal to the mass of the sink particle after the accretion takes place (RHS; post-accretion quantities are denoted by a prime):

$$\sum_i \Delta m_i = M'$$
Similarly, momentum conservation says that:

$$\sum_i \Delta\vec{p}_i = M'\,\vec{v}\,'$$
Note, this last equation gives the velocity of the center of mass of the accreted material, which must equal the velocity of the sink particle, given momentum conservation.
From here, we might be tempted to write:

$$\vec{L}\,'_{sink} = \sum_i \vec{r}_i \times \Delta\vec{p}_i$$

However, we already know that the mass of the sink and the velocity of the sink are $M' = \sum_i \Delta m_i$ and $\vec{v}\,' = \sum_i \Delta\vec{p}_i / M'$, respectively. Thus we have to write:

$$\vec{R}\,' \times M'\vec{v}\,' = \sum_i \vec{r}_i \times \Delta\vec{p}_i$$

and ask for what $\vec{R}\,'$ this equation is valid. Rearranging this equation (and calling the RHS $\vec{L}_{acc}$), we can write:

$$\vec{R}\,' \times \vec{v}\,' = \vec{L}_{acc}/M'$$

In general, $\vec{L}_{acc}$ and $\vec{v}\,'$ will not be perpendicular, thus there will be no solution for $\vec{R}\,'$.

What if we instead let $\vec{R}\,'$ equal the center of mass of the accreted material? Then the equation for angular momentum conservation becomes:

$$\vec{R}\,'_{cm} \times M'\vec{v}\,' = \vec{L}_{acc}$$

Equality then requires that $\vec{R}\,'_{cm} \times \vec{v}\,' = \vec{L}_{acc}/M'$, which also will not generally be true.

Thus, the sink angular momentum cannot strictly be set equal to the accreted angular momentum, if the angular momentum is to be conserved across the accretion step. Instead, we need an additional vector which can absorb the difference in the angular momentum. This is the reason behind devising a 'spin' angular momentum vector for the sink particle. With the spin vector, the total angular momentum across the accretion step can be conserved.
Using a spin angular momentum vector to conserve angular momentum
As discussed above, the sink angular momentum following accretion ($\vec{L}\,'_{sink}$) is set by the accreted mass and linear momentum from the kernel. It is given by:

$$\vec{L}\,'_{sink} = \vec{R}\,' \times M'\vec{v}\,'$$

As shown above, this angular momentum vector will generally differ from the total angular momentum accreted from the kernel ($\vec{L}_{acc}$):

$$\vec{L}\,'_{sink} \neq \vec{L}_{acc} = \sum_i \vec{r}_i \times \Delta\vec{p}_i$$

Thus, we can't simply set the particle's spin equal to $\vec{L}_{acc}$. Instead, we want to look for some vector $\vec{S}$ such that the total angular momentum across accretion is conserved. We call this vector the particle's spin. The spin can be found by enforcing conservation of the total angular momentum across the accretion step:

$$\vec{L}\,'_{sink} + \vec{S}\,' = \vec{L}_{sink} + \vec{S} + \vec{L}_{acc}$$

This equation shows that the updated total angular momentum of the sink = the old total angular momentum + the accreted angular momentum. By rearranging this equation, we can solve for $\vec{S}\,'$, which absorbs any excess angular momentum over $\vec{L}\,'_{sink}$:

$$\vec{S}\,' = \vec{S} + \vec{L}_{acc} - \left(\vec{L}\,'_{sink} - \vec{L}_{sink}\right)$$
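A minimal sketch of this bookkeeping (the container names and layout are illustrative, not AstroBEAR's actual sink data structures):

```python
import numpy as np

def accrete_with_spin(sink, cells):
    """Conserve total angular momentum over an accretion step by letting
    the sink's spin absorb the orbital/accreted mismatch."""
    dm = sum(dm_i for _, dm_i, _ in cells)
    dp = sum(np.asarray(dp_i) for _, _, dp_i in cells)
    # total angular momentum accreted from the kernel (about the origin)
    L_acc = sum(np.cross(x_i, dp_i) for x_i, _, dp_i in cells)

    L_orb_old = np.cross(sink['x'], sink['m'] * sink['v'])
    sink['v'] = (sink['m'] * sink['v'] + dp) / (sink['m'] + dm)
    sink['m'] = sink['m'] + dm
    L_orb_new = np.cross(sink['x'], sink['m'] * sink['v'])

    # the spin absorbs whatever the orbital term fails to account for
    sink['spin'] = sink['spin'] + L_acc - (L_orb_new - L_orb_old)
    return sink

sink = {'m': 1.0, 'x': np.zeros(3), 'v': np.zeros(3), 'spin': np.zeros(3)}
cells = [(np.array([1.0, 0.0, 0.0]), 0.1, np.array([0.0, 0.2, 0.0]))]
accrete_with_spin(sink, cells)
print(sink['spin'])  # equals L_acc here, since the sink sits at the origin
```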
Bondi accretion module mods
I'll be implementing two changes to the Bondi Accretion module. The first will be a flag which will force sink particles to only accrete the *radial* momentum of gas (see previous blog post for motivation). The second is a check that the accretion volume does not contain any cells that violate the Jeans condition following Bondi accretion. If so, the excess gas will be removed.
Radial momentum accretion fix: description and equations
Here is a diagram defining the relevant variables. Note, in what follows primes (') will denote accreted quantities.
The momentum vector of each cell in the accretion volume is decomposed into its radial and transverse components (relative to the location of the sink particle):

$$\vec{p} = \vec{p}_r + \vec{p}_t, \qquad \vec{p}_r = (\vec{p}\cdot\hat{r})\,\hat{r}$$

Following accretion, only the radial component of the momentum will have been (partially) accreted:

$$\vec{p}\,' = f_m\,\vec{p}_r + \vec{p}_t$$

where

$$f_m = \frac{\rho - \Delta\rho}{\rho}$$

and $\hat{r}$ is the unit vector along the line from the sink particle to the cell center (i.e. $f_m$ = the fraction of mass to be left in the cell following accretion; the radial momentum will be accreted proportionally to mass).

Subtracting the first two equations gives us an equation for the update of the momentum vector:

$$\Delta\vec{p} = \vec{p} - \vec{p}\,' = (1 - f_m)\,\vec{p}_r$$

which in cartesian coordinates is:

$$\Delta p_j = (1 - f_m)\,(p_x\hat{r}_x + p_y\hat{r}_y + p_z\hat{r}_z)\,\hat{r}_j, \qquad j = x, y, z$$

This last equation gives the accreted momentum in each cell within the accretion volume, for the case of radial momentum accretion.
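In code form, the per-cell update could look like this (a sketch following the equations above; names are illustrative):

```python
import numpy as np

def radial_momentum_accretion(p, x_cell, x_sink, fm):
    """Accrete only the radial part of a cell's momentum.

    p : cell momentum vector; fm : fraction of mass left in the cell.
    Returns (updated cell momentum, momentum handed to the sink).
    """
    rhat = np.asarray(x_cell, dtype=float) - x_sink
    rhat /= np.linalg.norm(rhat)
    p_r = np.dot(p, rhat) * rhat   # radial component of the momentum
    dp = (1.0 - fm) * p_r          # accreted proportionally to mass
    return p - dp, dp
```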
'The Angular Momentum Problem' of Sink Particles
So I think I digested the Krumholz accretion paper sufficiently well and will attempt to distill its key points regarding angular momentum accretion onto a sub-grid sink particle, as well as how this may ultimately connect to outflows and jets launched from sink particles.
The Krumholz accretion algorithm is fairly sophisticated in that it calculates a modified Bondi accretion rate for rotating gas within the accretion volume, and then only removes gas from each grid point that is bound to the sink particle.

Currently, in AstroBEAR, the Bondi accretion routine also removes linear momentum and specific internal energy from each grid point based on the fraction of mass this $\Delta\rho$ constitutes. In doing so, each component of the linear momentum is reduced by the same fraction. This reduces the magnitude of the momentum vector in the ith cell, but not its orientation. In other words, this results in the loss of angular momentum within that zone, and thus over the kernel:

$$\Delta\vec{L} = \sum_i \frac{\Delta m_i}{m_i}\,\vec{r}_i \times \vec{p}_i$$

where $\vec{L} = \sum_i \vec{r}_i \times \vec{p}_i$ is the total angular momentum over the kernel, and the subscripts denote the given quantity in the ith cell. Note the radial vector points along a line connecting the cell center to the sink particle. This loss in angular momentum is stored in the sink particle, which enables the accretion algorithm to effectively conserve mass, linear momentum, and angular momentum between the grid and the sink particle over an accretion step.
Now, onto the angular momentum problem of this routine…
Krumholz makes the argument (and after some thought I think I agree with him) that given the sink particle represents an object much smaller than the grid scale (think single star compared to protostellar core), the angular momentum of the sink particle is negligible compared to that of the gas on the scale of the accretion volume. For a rough idea, let's assume the accretion volume represents a protostellar core; its angular momentum is then orders of magnitude larger than the angular momentum of the Sun.
This argument (known as the 'angular momentum problem of star formation,' c.f. this review) implies single stars simply cannot acquire the enormous angular momentum of their host cores and clouds: else they would rotate faster than their break up speed.
Thus, Krumholz decomposes the linear momentum vector of each cell into a parallel component and a transverse component (with respect to the line connecting the sink particle and the cell center) and only accretes the radial component of the momentum. That is, since the angular momentum of the object the sink particle represents is negligible compared to that of the accretion volume, it should be ignored. By only accreting the radial component of the momentum vector, the angular momentum of the gas is preserved (as well as the radial velocity).
Connection to outflows
If one does not accrete angular momentum from the grid, then one cannot inject angular momentum into the grid from sink particles upon the jet launching step. Thus, if we choose to adopt this way of looking at things, our injected protostellar jets will not be rotating on their own (but perhaps will spin up due to the bulk motion of gas within the accretion volume).
This contrasts with the other school of thought, namely Federrath's sink particle algorithms (which are currently implemented in the code). This method DOES accrete angular momentum from the grid (as described above for the Bondi accretion routine), and thus under those conditions, injecting a rotating jet back into the grid should (hypothetically) be self-consistent and conservative.
Bondi Accretion Routine Review
The Bondi accretion routine begins by calculating the Bondi radius of the sink particle's host cell using the following equation:
R_B = GM/(c_s^2 + v_s^2)
This equation gives the sonic surface of infalling gas onto a moving point particle.
Next, the routine defines a region around the sink particle (on the finest AMR level) from which gas will be accreted. Within this region the accuracy of the hydrodynamical solution is lost, so we want to minimize its size. However, we also want to resolve the material flowing into the accretion region accurately enough. To balance these two effects we choose a radius of 4dx. This is called r_acc in the code.
The amount of gas removed from the cells within r_acc is given by the accretion rate times the timestep. This accretion rate should be carefully chosen when the sonic radius lies within r_acc. This is because the accretion rate sets the resultant pressure in the kernel, and so if the accretion rate is too high (the resultant pressure too low), the hydrodynamical solution in the cells surrounding the kernel could be degraded in the case of subsonic inflow.
In the other limit, R_BH > R_acc, too high an accretion rate shouldn't matter so much. The only issue I can foresee is if the accretion rate is too low, in which case gas would pile up within the kernel and eventually artificially fragment, since new sink particles are not created within the accretion region of already existing particles. For this reason it might make most sense to allow sinks to form within the accretion regions of already existing sinks (if R_BH > R_acc) and then immediately merge (within a timestep).
Next, the accretion rate is calculated using a semi-analytical expression found by Ruffert & Arnett (1994); see Krumholz 2004, eqn. 11. To calculate rho_inf in that equation, we calculate an average rho within the kernel using weights assigned to each cell within the kernel. These weights depend on the ratio R_BH : r_acc and effectively allow cells with higher inflow speeds to contribute more mass to the accreting sink particle. Correspondingly, these weights are then used to calculate the mass removed from each cell within r_acc during the accretion step.
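A sketch of the rho_inf averaging step, assuming a Gaussian-type kernel weight w ~ exp(-r^2/r_K^2) in the spirit of Krumholz et al. (2004), with r_K set by the ratio of R_BH to r_acc (the exact weighting in the code may differ):

```python
import numpy as np

def rho_inf_estimate(r, rho, r_K):
    """Kernel-weighted mean density over cells at distances r from the sink."""
    w = np.exp(-(np.asarray(r) / r_K)**2)
    return np.sum(w * np.asarray(rho)) / np.sum(w)
```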
Now this is all well and good if the gas is not rotating within the kernel. However, if it is rotating, then the mass should not be accreted spherically symmetrically from all cells within the kernel. To calculate a correction due to rotation, each cell within the kernel is broken up into 8^3 sub-cells, each with 1/8^3 of the mass, momentum, and energy of the host cell. The minimum distance between the sink particle and each of these sub-cells is then calculated. If this distance is not less than dx/4, then that sub-cell does not contribute to the accreted mass (it has too much angular momentum to ever fall onto the surface of the sink). Additionally, if any cell within the accretion region is unbound, it does not contribute mass to the sink particle. Note, any cell can contribute at most 25% of its mass to the sink particle.
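Here is a sketch of just the geometric part of that sub-cell check (the actual routine uses the orbit's minimum approach distance and applies the boundedness and 25% caps as well, which I omit):

```python
import numpy as np
from itertools import product

def eligible_fraction(x_cell, dx, x_sink, r_min=None):
    """Fraction of a cell's 8^3 sub-cells whose centers lie within r_min
    (default dx/4) of the sink, i.e. eligible to contribute mass."""
    if r_min is None:
        r_min = dx / 4.0
    n = 8
    offs = (np.arange(n) + 0.5) / n - 0.5  # sub-cell center offsets
    count = 0
    for i, j, k in product(range(n), repeat=3):
        pos = np.asarray(x_cell) + dx * np.array([offs[i], offs[j], offs[k]])
        if np.linalg.norm(pos - np.asarray(x_sink)) < r_min:
            count += 1
    return count / n**3
```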
Now, with regard to momentum changes to cells within the accretion radius, I was surprised to learn that Krumholz's model for Bondi accretion reduces only the radial momentum (along the line between the cell center and the sink particle), in order to preserve the *radial velocity and angular momentum of the gas*. This is because the sink particle is envisioned as being so small in comparison to the grid scale that its accreted angular momentum is negligible.
Meeting update for April 18th, 2017
Feedback
Almost ready to start pulling out my feedback development branch and finalizing testing/development for that. At this point we are at a crossroads — we can either
a) improve the algorithm to simultaneously inject the target amounts of linear and angular momentum into the outflow cones (currently the code has trouble getting the right amount of angular momentum into the cones),
b) ignore angular momentum injection altogether (which would limit accurate simulation of outflows on small scales, I suppose), or
c) leave the code as is, which can introduce some potentially wonky behavior of the outflows under certain circumstances (such as a counter-rotating accretion disk/star system).
Reorientation Paper
The referee had a few comments on the paper before it can be accepted for publication. Namely, he thought there should be a convergence study in the paper (sims are currently running for this), suggested adding an infinite cooling case (we are making a counter-argument against this; see the attached draft of the referee responses), and asked that some of the plots be remade with larger text (working on this now; should be finished by the end of the day).
The referee responses are due no later than May 9th, but I would like to get them in by the *end of the week*. Since the report is so short, I thought I would at the very least run it by Jonathan before submitting.
Thesis Defense
I still need to finalize the last person on my committee and to check with Segev again about the scheduled defense date of August 4th.
In order to defend my thesis by August 4th, I need to have a copy of my finished thesis to the committee by *June 26* (almost two months away). How long do these things typically take to write? If it takes 3 weeks, that leaves me a little over a month to get the feedback sims up and running for the next paper (before I have to hunker down and write my thesis).
Thermal Instability Presentation
I recently read Koyama & Inutsuka 2004 on the "Field Condition" (convergence condition for the thermal instability), and really enjoyed it. Put together a short presentation on this paper to share with you all (see link here: https://docs.google.com/presentation/d/1uaQ5ukZL6sNtdwxfubdH_EHZIQfWY8EESwC1P0QtV9I/pub?start=false&loop=false&delayms=60000).
How cooling is called/used in the code
The code's main program is scrambler.f90, of course. In it are the initialization sequence calls, like 'physics init'. One of the things physics init does is call cooling init.
Cooling Init is in Cooling.f90. It is commented to say "initializes cooling table", but actually doesn't do anything… (it is a stub). Perhaps it is a relic from an older version of the code.
Now, the main meat of the cooling functionality goes like this.
In source_control.f90, there is a function called "SrcDerivs". This function's main goal is to populate the dqdt array with the code's source terms. The dqdt array contains geometrical source terms (like cylindrical corrections), energy source terms (like cooling), etc. At the end of this function we have "SrcDerivs = dqdt" (an array assignment).
If cooling is turned on, this function SrcDerivs calls the function "Cooling" (located in the cooling module: cooling.f90). "Cooling" returns a dqdt for SrcDerivs, specific for the type of cooling that is switched on.
For instance, take IICooling. "Cooling" selects the type of cooling that is present with a case statement:
case (IICool) : call II_Cooling
Now, II_Cooling is located in IICooling.f90, and it calculates a dqdt(iE) based on heating/cooling rates which are themselves functions of the II cooling parameters (stored in the array IICoolPars). Once dqdt(iE) is calculated for this cooling choice, it is stored in the iE element of the array SrcDerivs.
The SrcDerivs array is used to update the q array in the subroutine "RKorder2", back in source_control.f90. This is something like: q(i) = q(i) + SrcDerivs(i)*dt.
The procedure would be similar (i.e. SrcDeriv and q would be updated in a similar manner) if another cooling process was selected instead of IIcooling.
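Schematically, the whole flow looks like this (written in Python rather than the code's Fortran; the indices, flag values, and toy cooling law are illustrative only):

```python
import numpy as np

iE = 3        # index of the energy field in q (illustrative layout)
IICool = 2    # illustrative flag value for II cooling

def ii_cooling(q, pars):
    # toy stand-in for II_Cooling: net dE/dt = heating - rho^2 * Lambda
    return pars['Gamma'] * pars['rho'] - pars['Lambda'] * pars['rho']**2

def src_derivs(q, pars, i_cooling):
    # mirror of SrcDerivs: fill dqdt with the active source terms
    dqdt = np.zeros_like(q)
    if i_cooling == IICool:
        dqdt[iE] += ii_cooling(q, pars)
    return dqdt

def rk2_source_update(q, pars, dt, i_cooling):
    # midpoint (second-order) source update, in the spirit of RKorder2
    k1 = src_derivs(q, pars, i_cooling)
    k2 = src_derivs(q + 0.5 * dt * k1, pars, i_cooling)
    return q + dt * k2

q = np.array([1.0, 0.0, 0.0, 2.5])
pars = {'rho': 1.0, 'Gamma': 0.1, 'Lambda': 0.05}
print(rk2_source_update(q, pars, dt=0.01, i_cooling=IICool))
```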
Thus, it would seem, the only things necessary for turning cooling on in the code are the following two steps:
- Select the type of cooling in physics.data, e.g. iCooling=3 (for II cooling); as long as iCooling /= 0, cooling should be turned on in the code
- Define any cooling params used by the cooling module by adding them to the problemdata namelist in problem.f90 (e.g. the IICoolPars() array for the case of II cooling), and define their corresponding values in problem.data
You should now be ready to cool!
Nov. 2nd, 2016 Meeting Update
Need letter to YCAA today
Questions on Hubble Proposal Draft
Paper list on zoom-in: http://adsabs.harvard.edu/cgi-bin/nph-abs_connect?library&libname=Zoom-in&libid=56b0f4173b
Have an interview next week at Univ. of Hertfordshire. This is with: Jim Dale, Martin Krause, Martin Hardcastle (as Director of the Centre), and Janet Drew (panel chair, and PI of the grant)
Dr Strange this weekend?
JOBS!
Stuff is CRAZY right now — I'm in the thick of job applying.
In terms of science, the referee report seems good for the reorientation paper. Will be working on that in the coming weeks.
meeting update
Rotating, supercritical Bonnor Ebert sphere forms a disk, protostar, and outflow.
Here is the density right before the outflow leaves the sphere:
right after:
Looks like an RT instability along the jet axis.
Radial velocity (ignore the legend label — it's wrong) shows that the flow within the kernel is not directed outward, but beyond the kernel it is… Not sure why yet. Infall is happening in the disk though, which is as expected:
The temperature shows the outflow is orders of magnitude hotter than the core. This may need to be adjusted:
A movie of density can be found here:
Some results from outflow testing.
The following are plots of the outflow kernel, that is, the profiles of injected density and velocity of the two-component sub-grid model:
In fixed grid we see a very nice outflow. Here is the density (rho_outflow=rho-rho_amb):
But in AMR, the outflow looks like:
This particle has a buffer of finest-level cells around it of r = 16 zones = the outflow kernel. We still see edge effects, however. I tried making the buffer larger, to r = 32 zones, but this didn't fix the strange mesh behavior:
Looking at the outflow velocity (which is along z, the particle's spin axis), we have in fixed grid:
And in AMR with 'r_sink=32',
I also played with a collapsing BE sphere. Here is a movie of a marginally stable BE sphere that is perturbed to collapse by a density enhancement. The outflow direction of the particle is not fixed, and so I think it is aligning to random noise in the accreted spin, which is why it looks off-center by the end of the movie. We should probably initialize the BE sphere with some rotation.
Movies of high res outflow
Attached to this post.
Tests of the feedback routine
Using the Bondi routine with the particle's position fixed:
Mass conserving? | yes |
Can it run in AMR? | |
Can it run on multiple processors? | |
Using the Bondi routine with the particle's position fixed:
Mass conserving? | yes |
Momentum conserving? | |
Angular momentum conserving? | |
Using the uniform collapse module:
Meeting update
Fixed a few bugs with the outflow feedback routine and test module. The outflow looks qualitatively reasonable. Next, I have a suite of tests to run to try to break the new routines.
For now, am starting with the Bondi module, which prescribes a Bondi flow onto the origin given a central mass. The central sink particle is 0.4 solar masses. It is accreting via the Bondi accretion routines and is launching an outflow at 0.5 efficiency (that is, 50% of the accreted mass is launched in the outflow). The opening angle is 30 degrees.
meeting update
sad eddie is gone
Other than that, finished the newest draft and am next turning toward testing the outflow module Jonathan installed.
Meeting update
Paper editing. Just a couple of points:
Is it ever okay to use just the word 'mach'?
e.g. "The mach of the flows ranged between X and Y".
Also — for the methods section, why was II questioned? And also, is my wording about the box size confusing?
Meeting update
- Melissa Morris reached out and told me she is submitting a proposal for the Emerging Worlds NASA project, and thinks I would be perfect for the project. If it goes through, she would like to offer me a postdoc at SUNY Cortland. The project would study chondrule formation via large-scale shocks in protoplanetary disks around young stars, using radiative transfer.
- Made a cv (find it here: http://www.pas.rochester.edu/~erica/cv.html)
- Poster got accepted (but not the talk) for the upcoming "STAR FORMATION 2016" conference in Exeter
- Reorientation paper coming along nicely
- Spoke with Jonathan about beginning to build the outflow feedback routines. We plan on starting that in the next week or two.
- Submitted the shear flows paper to be published in Bo Reipurth's monthly Star Formation newsletter, cf.: http://www.ifa.hawaii.edu/~reipurth/newsletter.htm
- Thinking of reaching out to some professors at University of Missouri, St Louis, about meeting or giving a short talk while I am here for the week. It seems like 2 of them are sort of star-formationy. http://www.umsl.edu/~gibbe/Site/Research.html and http://www.umsl.edu/~wilkingb/
Meeting update
Writing the MHD shock reorientation paper. Expect to have a first cut of the paper through the results section by the end of the week. Would like to discuss this draft next week, and next steps for the paper.
Would like to look through the plots of the 2D runs in the meeting and decide on a set to focus on.
Oh, and reorientation did not occur as strongly in the 60-degree runs as we saw last week, now that I've made the box big enough to contain all the field lines. At this point, am seeing the strongest reorientation occur in the beta=10 cases (not the beta=1 case like we saw last week).
Some thoughts on oblique shocks (both hydro and MHD)
1D Hydro Oblique Shocks.
Here is the pseudo color plot of rho with velocity vector field:
Here are lineouts of fluid vars:
The initial conditions have no gradients in y (the Riemann discontinuity is in x). This means the d/dy terms in the y-momentum equation go to zero, and py is essentially advected through the grid passively (this would explicitly happen if nDim=1 in the code). There are periodic boundary conditions in y as well, so this is effectively an infinite 1D problem.
Now, because there are no pressure gradients in y, we would not expect vy to change across the outer shock front. Instead, the only thing that could happen is a change in the x quantities. This is consistent with the lineouts above.
Also interesting is how the component of the velocity perpendicular to the shock front (vx) goes exactly to zero across the shock. At first I thought this must again be because there are no gradients in the setup that would drive changes in vy across the shock: if vx did not go to zero across the boundary, vy would change, so vx necessarily must go to zero across the shock, regardless of Mach number, angle, etc. However, upon further thought, I don't think this is valid. vy is the component parallel to the shock, and thus should be constant across the shock no matter what. So why does vx go to zero?
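For reference, the standard hydrodynamic Rankine-Hugoniot conditions for a shock with its normal along x say exactly this:

$$\rho_1 v_{x,1} = \rho_2 v_{x,2}, \qquad \rho_1 v_{x,1}^2 + P_1 = \rho_2 v_{x,2}^2 + P_2, \qquad v_{y,1} = v_{y,2}$$

So vy is indeed continuous, while vx can only drop by the compression ratio. That vx goes exactly to zero is presumably special to this colliding-flow setup, where the post-shock gas is at rest in the symmetry frame, rather than a general property of oblique shocks.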
1D MHD Oblique Shocks.
The case is a little different in MHD. In MHD, all of the fluid velocity components are kept in the Euler equations for our solvers, even when nDim=1. Is this because now there can be gradients in y generated by the field, even when none exist initially? We certainly see different behavior in the lineouts for the MHD case. Where there was no change in vy across shocks in 1D hydro, there are changes in vy across 1D MHD shocks:
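This is consistent with the MHD jump conditions: when the normal field component Bx is nonzero, the tangential momentum flux picks up a magnetic tension term,

$$\rho_1 v_{x,1}\,(v_{y,2} - v_{y,1}) = \frac{B_x}{4\pi}\,(B_{y,2} - B_{y,1})$$

so vy jumps whenever the tangential field does; with Bx = 0 the hydro behavior (vy continuous) is recovered.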
Some filament/magnetic field observations
The integral-shaped cloud, and field
The following images depict the integral-shaped filament in Orion-A, and some have corresponding magnetic field vectors overlaid. One hypothesis is that this filament is wrapped by a helical magnetic field.
This filament/field morphology also bears striking resemblance to an oblique, reoriented colliding flow (60-degrees):
Herschel view of Taurus
Here is another example of a well-studied filament and corresponding field topology. I am pretty sure I've read that the uncertainty on the field vectors increases in the densest regions of the gas.
Reorienting MHD Colliding Flows
MHD, w/cooling, mach=1.5, beta=1, beta_ram=3.8
30-degree angle
(Movie here)
Here we see that the outer shock stalls on the top right and bottom left of the collision interface. This has the effect of 'reorienting' the shock front over time, so that it becomes normal to the magnetic field. It takes much longer for the internal collision surface to reorient, and even then it only becomes normal to the flows near the flow/ambient boundary. That happens to coincide with the field becoming parallel to this inner shock layer. Late in the sim, we also see what appear to be magnetic field waves leaving the collision region and heading toward the flow boundaries. The entire interface then becomes unstable and begins to break down.
Note also that while the 1D solution (without cooling) shows velocity vectors going to zero across the slow shock, the present case with cooling shows strong velocity vectors 'falling onto the main filament'. This must be due to the cooling/compression of the gas. This motion in the cooling case leads to a more kinked field.
60-degree angle
Wow, we get much stronger reorientation in this case, and in fact it seems to proceed by a slightly different mechanism — one in which the deformation of the field plays a big role. Notice in the movie of rho below that instead of 'blow-out' regions from the central shock zone, we start to see the flow turn around and re-enter the collision region! The flow is following the strongly deformed field. The flow vectors seem to be pushing on the central interface, causing the entire thing to reorient.
The field structure is also fascinating to watch. Over time, we see a central 'line' of black appear… this is the strongest field strength in the plot.
It is interesting that despite the cooling, the shock fronts are so nicely supported. Clearly, they are magnetically supported.
MHD, w/cooling, mach=10, beta=1, beta_ram=38
The strong shock creates strong post-shock pressures that cause a fast ejection of material from the collision region. Because of the strong shock, we also get strong compressions and thus strong cooling. This reduces the thermal pressure in the collision zone, which triggers thermal instability, and subsequently NTSI, KH, and RT modes. The fast shock is no longer supported by either thermal or magnetic pressure, so it collapses down onto the thin boundary. If the field were stronger, we would likely see more magnetic support and thus a stronger fast shock front.
Despite the ripples of the collision interface, we do see reorientation occur. While the instabilities are growing in the center, you can see the field 'ballooning'. Oppositely directed field comes together and looks to dissipate in strength (going from dark to light in color).
MHD, w/cooling, mach=1.5, beta=10, beta_ram=38
These are the same parameters as the 3D MHD colliding flows paper. In that case, the 30 degree shock didn't seem to reorient, but the 60-degree shock did.
30-degree angle
Weakening the field leads to less reorientation. Neither the outer shocks nor the inner one reorient as much as in the beta=1 case.
We also see that the ejected material pushes the field out to a larger radius, which is expected. This is the same as saying the flow can escape the collision region more easily in this case, and this is why we start to see a straightening of the upper and lower 'knobs' of ejected material. They are the only parts of the flow that reorient and become normal to the flows. The internal collision layer doesn't reorient.
movie of rho movie of field
60-degree angle
MHD, w/cooling, mach=1.5, beta=.1, beta_ram=.38
Keeping the Mach number constant but decreasing the beta (i.e. strengthening the field), we see that the field is strong enough to resist significant ejection from the collision region. Without this ejection, the outer shocks do not stall, and therefore we do not see reorientation occur.
MHD, w/cooling, mach=10, beta=.1, beta_ram=3.8
Meeting update
~Papers~
MHD shock paper
Check out outline on sharelatex
Paper ideas (for undergrad project)
Connie says there are no REU students, but that there may be undergrads available if Adam wants to hire them off of his grants?? I did a search and couldn't find any abstracts with the words Bonnor-Ebert together with radiation or radiative transfer. I think an easy project for a student would be running the collapse of a BE sphere with rad feedback. This could be a very high resolution run (AU scale, and could include MHD), where a disk forms. The questions might be:
- How does M-dot (and hence, accretion luminosity/rad feedback) change with rad feedback (relevant to the "luminosity problem" we were learning about last week)?
- How do M-dot and rad feedback change when the BE sphere is embedded in a 'realistic, continuous density medium'?
Paper ideas inspired by new Krumholz et al
Disentangling the effect of the outflows on support surrounding young protostars
Studying the role of turbulence and/or density fluctuations (last paragraph of conclusion says that is lacking)
The paper makes conclusions based on drawing spheres around sinks and calculating various critical masses — but can we test these ideas? Under what limits do these ideas hold or not? Basically, trying to get a physical handle on the problem. Maybe this can also help with understanding why the accretion flow is not Bondi.
Meeting Update
Mean Vorticity
Query results:
For S0:
"Average value" | The average value of 3D/AbsVorticity is 22.2432 |
"Sample Stats" | Mean=97.3728, Std Dev= 115.999, Variance=13,455.7, Skewness = 1.78141, Kurtosis = 3.66127 |
"Variable Sum" | The total 3D/AbsVorticity is 7.55786e+08 |
"Weighted Variable Sum" | The total 3D/AbsVorticity is 156,936 |
" NumZones" | The original number of zones is 11,114,080. |
"Weighted Variable Sum" | The total rho is 35,091.4 |
"Volume" | The total Volume is 7,055.47 |
Types of mean from query output:
Variable sum vorticity/NumZones = 68
Variable sum vorticity/Mass hockey puck = 21,537.9
Variable sum vorticity/ volume hockey puck = 107,120
Weighted sum vorticity/Numzones = 0.014
Weighted sum vorticity/Mass = 4.5
Weighted sum vorticity/Volume = 22.24
For S60:
"Average value" | The average value of 3D/AbsVorticity is 37.8458 |
"Sample Stats" | The average value of 3D/AbsVorticity is 37.8458, Mean=7.8336, Std Dev = 45.8268, Variance = 2,100.09, Skewness = 1.35431, Kurtosis = 1.48626 |
"Variable Sum" | The total 3D/AbsVorticity is 8.31277e+07 |
"Weighted Variable Sum" | The total 3D/AbsVorticity is 662,507 |
" NumZones" | The actual number of zones is 4,836,010 |
"Weighted Variable Sum" | The total rho is 35,027.1 |
"Volume" | The total Volume is 17,505.4 |
Types of mean from query output:
Variable sum vorticity/NumZones = 17.19
Variable sum vorticity/Mass hockey puck = 2,373
Variable sum vorticity/ volume hockey puck = 4,748
Weighted sum vorticity/Numzones = 0.137
Weighted sum vorticity/Mass = 18.9
Weighted sum vorticity/Volume = 37.8458
Meeting Update
Histos
"Density Weighted":
This plot is the one I sent around yesterday.
The issue may be that the y scaling is way off, given the mass fluxes (and total masses) of the hockey pucks vary so much. To clarify, the masses of the various hockey pucks are (S0→S60):
S0 | 35091 |
S15 | 34735 |
S30 | 31640 |
S60 | 16894 |
So, I then defined a new variable that divides rho by the total mass of the puck for each of the runs. This is the "mass normalized rho", and I then weighted the histos by this variable.
"Mass normalized rho weighted" (linear bin scale):
"Mass normalized rho weighted" (log bin scale):
Next, instead of weighting by a variable, I made the histo by 'frequency':
So, in all of these cases, regardless of y-axis scaling, the max vorticity is highest in the lower shear cases… perplexing :-/
Min/Max Vorticity
S0 | 5e-5/1068 |
S15 | 1e-4/1005 |
S30 | 2e-4/949 |
S60 | 0.017/310 |
Histo of vx
Instead of vorticity, how about the spread in v? Or the RMS v?
Here is vx histo for the diff runs:
Shock solutions
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/scratch3
Calculating the M2FR
Link: http://mnras.oxfordjournals.org/content/414/3/2511.full.pdf
Meeting Update
Papers
Referee Report Points:
The ref says, "Here, I list some points which should be added/clarified:"
1. Discussion on the mass-to-flux ratio / inflow boundaries: This should be complemented with a plot showing the total gas mass (and/or the column density) in the simulation box as a function of time. This plot should then also show the total mass-to-flux ratio as a function of time. | | |
2. Protocluster formation and evolution: Again, a plot showing the mass evolution of the sink particles would be very helpful. Also the mass accretion as a function of time would be illustrative. | Give a plot of mass 1 Myr after formation. | X |
3. Furthermore, it is not clear why younger clusters (later formation time) have a higher final mass. Please include a discussion (helpful with the above mentioned plots). | Have this plot for the S0 case, will send him this one. | X |
4. The cartoons (Fig. 5) on the magnetic ring model are confusing: It looks like a ring-like structure (resp. spherical bubble) is expanding in the plane of the flows, i.e. in the xz-plane. This is certainly not true (see Fig. 3). For the cartoons it would be better to use two different views, according to the numerical setup, i.e. a view in the yz- and the xz-plane. | | |
5. About the discussion on the impact of shear/power spectra: Here, it would also be useful to show the generated vorticity. E.g., an increasing vorticity could also explain the reduced star formation. | | |
Other points:
Include citations to work by Federrath and Dobbs | |
Sink plot after 2 Myr | |
Shear plot | |
Data we are missing:
Run | Frames |
0 | 273/273 |
15 | 200/273 |
30 | 200/273 |
60 | 329/329 |
Meeting Update
Last week:
- I spent the week so far debugging and testing the rad feedback module; see the blog post. I began writing a page on this module as well, here (figured this was useful for new users, as well as for my thesis, which will likely incorporate this as a section).
To still do this week:
- Email people back who have gotten in touch with me about paper
- Email people links to my paper (i.e. do some networking)
- Look into Marshak waves and how they pertain to my sink tests
- Address the referee report for the CF paper
- Set up a 2D collapse problem, motivated by different timescales for the radiation equations.
Things I am thinking of putting on the back burner for now:
- Getting Rad Feedback working and tested in 3D, as well as AMR
Things I want to begin working on next week
- Next week I am thinking of switching gears and beginning on outflow feedback. This entails reading the paper again, planning the code revisions, and then of course, coding them.
- Reading papers on radiation feedback.
- Applying for conference talks/posters.
Rad Feedback from Sink Particles Update
This last week I spent testing the feedback code. To do this, I started with a sink particle at the center of the grid and prescribed some energy to be injected into the grid each timestep. Here are some results.
Run 1.
Final time - 1e-4
Energy injection rate - 28052.48 (chosen so that by the end of the simulation a total of 2 Erad will have been injected into the grid, where Erad is the total radiative energy in the simulation at t=0)
Opacities - both set to 1d-6.
This run produced the following graph of Erad vs t, verifying the code is behaving as expected:
That is, we recover the energy injection rate, as given by the 'predicted' curve.
Here is how the radiative, thermal, and total energy behave over time.
A pseudo-color plot of Erad over time seems a little odd though, and I am trying to understand the behavior here. As this movie shows, the min and max remain very close over the simulation (a query shows they differ only in the 7th decimal place), and by frame 1 there is already a large sphere on the grid. The legend isn't fixed (if it were, it would be hard to capture the sphere at all), so the breathing may be an artifact of the plotting. Why are the min and max so close like this?
I then lowered the final simulation time while keeping the energy injection rate the same. Here is a pseudo-color plot movie of that, but I see the same behavior. Perhaps the injection rate is small compared to the diffusion rate, so that everything smooths out within a timestep? Need to look at the different rates in more detail.
In the meantime, I kept the final simulation time small but increased the energy injection rate to play with different opacities. These results I will go through next.
Run 2.
tfinal - 1e-9
energy injection rate - 5,610,800,000 (such that 4*Erad is injected over course of simulation, where Erad is initial total Erad in the grid)
opacities - both set to 1d-6
So as you can see, AstroBEAR is losing some of the energy over the course of the simulation (supposedly the rad transfer routines aren't conservative??), but it is still performing quite well.
For this case, the pseudo-color plot looks quite nice. It is shown in my previous blog post under 'Run 1'. I made a movie that shows how the q variables behave over time for a radial cut of the data. As the pseudo-color plot suggests, we are seeing little coupling between the gas and the radiation. For the next run, I will increase the Rosseland opacity to try to confine the radiation to a small region around the sink particle, but not change the Planck opacity, so that the gas will remain unaffected by the radiation.
Run 3.
tfinal - 1e-9
energy injection rate - 5,610,800,000
opacities - Rosseland=1d-3, Planck=1d-6
The q-variables are shown in this movie here. For kicks, here are some images of the kernel:
All looks as expected.
Run 4.
Now I am trying to couple the gas to the radiation, so I boosted the Planck opacity. I found it had to be increased many orders of magnitude above the Rosseland opacity for the two to couple.
tfinal - 1e-9
energy injection rate - 5,610,800,000
opacities - Rosseland=1d-3, Planck=1000
Here is a movie of the q-variables again. And a movie of just Erad here. The pressure, temperature, and total energy grow at the same rate, but the density remains flat. I am curious why the density is not being affected at all.
Updates on rad feedback
Run 1.
2D, periodic boundary conditions, 128^2 resolution. There is a sink particle in the middle of the box that is injecting a constant 'accretion energy' into the center zones of the grid.
Plot is of Etherm (left) and Erad (right) over time,
Thermal energy is derived by subtracting the kinetic energy from the total energy (as there are no magnetic fields or gravity in this simulation). The total radiative energy in the box at the start of the sim is Erad = 90. By the end, Erad = 10,038.
Both the Planck and the Rosseland opacities are set to 1d-6. The right plot shows that the radiative energy diffuses through the grid much faster than it exchanges energy with the gas (i.e. changes in thermal energy happen more slowly). Hardly any change in thermal energy is happening at all, in fact. Unless I made an error in my expression, I am not sure how the radiative energy can be increasing while the thermal and total energy remain equal…
There is a slight 'blip' noticeable between the 1st and 3rd frames. Not immediately sure what is causing this either…
I would like to check that the total energy is being conserved. To do this, should I make an expression that sums E and Erad and check that it is constant over the course of the sim?
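Something like the following is what I have in mind (a sketch; E and Erad are the cell-wise gas and radiation energies, dV the cell volume, and the injection-rate bookkeeping is hypothetical):

```python
import numpy as np

def total_energy(E, Erad, dV):
    # grid-summed gas + radiation energy
    return dV * (np.sum(E) + np.sum(Erad))

def conservation_drift(E, Erad, dV, t, total0, inject_rate):
    # fractional drift relative to the injected-energy budget;
    # should stay ~0 for a conservative scheme
    expected = total0 + inject_rate * t
    return (total_energy(E, Erad, dV) - expected) / expected
```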
Overall though, the run seems to proceed nicely for these parameters (I still need to check my thermal energy expression). I am trying to set up a simulation where I control the rate of energy injection such that the total Erad doubles over the course of the simulation. In the meantime, I experimented with differing opacities, as shown next.
Run 2.
High Rosseland opacity (radiation trapped, k=.001), low Planck opacity (no coupling of radiation and gas, k=.1d-6).
Note, no blip in this run. I see that the thermal energy is not changing much here either. The max Erad is much greater than in the run above; the radiation being confined to a smaller region explains this.
Run 3.
High Rosseland opacity (radiation trapped, k=.001), high Planck opacity (coupling of radiation and gas, k=10).
Now I am starting to see an increase in thermal energy. Need to look at the equations and see how the opacities enter and how they relate to each other.
Choosing the accretion luminosity for radiative feedback
This plot shows 'distance from the star' on the x-axis and 'ke(R)/(GM/R)' on the y-axis, for different starting kinetic energies ke(r). ke(r) for the different curves is given in terms of the 'freefall energy' at r, where freefall energy = GM/r. Thus, this plot *does not* show how the energy changes as the material falls in; it rather asks the question: if you start at r (x-axis) with energy ke(r) (different curves), what fraction of GM/R will your accretion energy be (y-axis)? Note, the horizontal line shows GM/R, normalized to 1.
Let's start by considering the pink line, where a particle starts from rest a distance r away. That is, $ke(r) = 0$, and energy conservation gives

$$\frac{1}{2}v^2(R) = \frac{GM}{R} - \frac{GM}{r} \;\Rightarrow\; \frac{ke(R)}{GM/R} = 1 - \frac{R}{r}$$
From this, we see that even at distances small compared to a typical zone size (~1/100,000 pc), the 'error' is ~.001%. That means, unless your zone size is smaller than this scale, we wouldn't expect any noticeable deviation.
The other lines show what happens if the particle didn't start from rest, but rather at some fraction of the freefall speed it would have acquired falling to r from infinity. This shows that for any gas parcel falling onto the sink within the accretion volume, moving at any speed from 0 to freefall, the two energy estimates nearly agree (again, until you get to extremely small scales).
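To spell out the reasoning behind these curves (the same energy-conservation argument as above; alpha is my shorthand for the starting fraction of the freefall energy): if $ke(r) = \alpha\,GM/r$ with $0 \le \alpha \le 1$, then

$$ke(R) = \alpha\frac{GM}{r} + \frac{GM}{R} - \frac{GM}{r} \;\Rightarrow\; \frac{ke(R)}{GM/R} = 1 - (1 - \alpha)\frac{R}{r}$$

so for any sub-freefall starting speed, the deviation from GM/R is at most of order R/r.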
What about gas in the accretion volume moving faster than freefall? First of all, this material should not be bound, so it wouldn't be accreted anyway. But assuming it were, here is a plot now showing curves corresponding to ke(r) > GM/r.
Here, GM/R is the x-axis. Again, this estimate holds to good accuracy down to extremely small distances from the sink particle (in terms of typical simulation resolution).
Meeting update
Rad Feedback
Expect to have a model up by the end of the week
Global Collapse Modes in CF model
http://mnras.oxfordjournals.org/content/414/3/2511.full.pdf - Vazquez et al 2011
Conferences
The two conferences I would like to attend this year are:
- "Star Formation 2016" - U of Exeter - End of August - website
Registration is now open, and there is a call for abstracts:
"We invite abstracts that span the full range of our field; over five days we will have sessions covering molecular clouds, protostellar cores, young stellar clusters, protostars, and planet-forming discs."
30th April 2016: Abstract submission deadline and end of early registration
17th June 2016: Registration closes
21st August 2016: Conference begins
The final programme for the conference will be announced in early June.
- "VIALACTEA 2016. The Milky Way as a Star Formation Engine: towards a predictive Galaxy-scale model of the Star Formation Life-Cycle" - End of September - Rome - website
Mars
Check out this sweet image of Curiosity in high res
Updated Ring Model
The model and equations are described here: https://astrobear.pas.rochester.edu/trac/wiki/u/erica/magneticring
meeting update
I am building a ticket on RadFeedback from sink particles as I go along with this development: https://astrobear.pas.rochester.edu/trac/ticket/442
Been in touch with Federrath and learned a bit more about how Disperse works; wrote this up here: http://astrobear.pas.rochester.edu/trac/wiki/u/erica/Amira
Am building a talk for my qual, and am modifying the paper draft to be given as my qual report.
Working on building feedback routines for radiation transfer — going to modify the Bondi accretion routine to do this; am close to understanding what the routines are doing there.
Colliding Flows draft
My edits of the paper draft are done, as far as I can go right now. I stopped at the spectra section, which is the weakest section of the paper and is not yet telling a coherent story. I fixed one of the spectra plots; it looks better. Not sure where to go with the spectra section at this point, whether I should take the time to improve it now or not. I still need to add a discussion section to the paper as well.
I'm almost inclined to take the spectra section out of the paper entirely… Thoughts?
meeting update
Heavily updated the M2FR and the protocluster formation sections of the paper. Also cleaned up the intro. Am now working the rest of the way through the paper.
Meeting update
Summer school
Got to play a little with the Gandalf SPH code and simulated the collapse of a singular isothermal sphere. Learned that Python can process Chombo HDF files, among lots of other cool astrophysics things.
Amira
Today I followed up with Christoph Federrath about the Amira vs. Disperse paper. I largely fleshed out the paper structure and figures and am excited to begin this project.
The goal of the paper will be to address the following 4 questions:
Do these different programs:
- Find the same # of filaments
- Find the same filaments
- Find "realistic" filaments
- Find "actual" filaments
For 3., Christoph's paper (link: http://www.mso.anu.edu.au/~chfeder/pubs/filaments/Federrath_filaments.pdf) talks about the differences in filament count when adding different physics to simulations. He finds that as you add more and more physics (turbulence, gravity, outflows), more and more filaments form in the simulations (no real surprise here). However, of greater interest is the fact that the average filament width is r = 0.1 pc. He finds this across the board (irrespective of Disperse's sensitivity threshold and simulation resolution), so long as there is turbulence in the simulation. Since this matches observations (see the Herschel findings), I call these "realistic" filaments. I think it would be of interest to see whether Amira also finds this result.
For 4., I described how we were throwing around the idea of creating a highly idealized simulation of 'definite' filaments and then feeding it through both Disperse and Amira. Do they both track all of the actual filaments we created? (I would imagine they would.) Next, we add noise to the filaments progressively and watch if and when Disperse and Amira diverge in their findings.
I told him that if he could send me the 2D column density map (CDM) of one of his simulations with turbulence, I could run it through Amira to get this started. I think we should consider how Disperse's 'persistence threshold' compares to the parameters Amira uses, and how to normalize the two programs as best we can…
Colliding Flows
Going to work on discussion and turbulent Jeans mass estimate these next couple of days
Meeting update
I'm back from getting married and having a honeymoon!
Next week I travel to Heidelberg for a week-long summer school on ISM dynamics (bringing the schedule to the meeting; it looks very good). I just booked flights and am preparing for this trip.
I am also working on journal club, happening in 2 days (got a fun paper to review), and reading up on the literature from the last month, until I get the draft back and can pick it up again.
Paper draft in pdf form
Working on draft. Attached.
mass 2 flux ratio
Vazquez-Semadeni et al. 2011 say that for a uniform cylinder with a mean magnetic field $B_0$ along its axis, the criticality condition (i.e. the critical mass-to-flux ratio) in terms of the surface density and the field strength is given by:

$$\frac{\Sigma}{B_0} = \left(\frac{M}{\Phi}\right)_{crit} = \frac{1}{2\pi\sqrt{G}}$$

where $\Sigma$ is the surface mass density, i.e. $\Sigma = \rho L = \mu m_H n L$, and $L$ is the length of the cylinder.

This can be rearranged for the critical length of a cylinder of mean magnetic field strength $B_0$ and number density $n$:

$$L_c = \frac{B_0}{2\pi\sqrt{G}\,\mu m_H n}$$
Given my initial field $B_0$ and n = 1 cm^-3, the critical length works out to lie between the two simulation box sizes.
This means that for the smaller box simulations, the colliding flows are sub-critical. However, for the larger box simulations they are super-critical.
Now, we do not see global gravitational collapse, so this measure of super-criticality doesn't seem to mean much. It doesn't take into account the kinetic energy in the flows, or the turbulent and thermal pressures that would also be counteracting gravity. The fact that we do not see collapse even though the purely magneto-gravity critical condition is satisfied means there are other, stronger forces in the cylinder preventing collapse.
If, instead of putting the initial density in the above equation, we look at the length scale of a cylinder with density equal to the perturbation density, what do we get? First, let's take the average cloud density to be 100x the initial density. Then we get $L_c \approx 1.5$ pc.
This seems to say: if we assume the collision region is shocked to a mean density of n = 100 n0, and the strength of the field stays at the initial value (which IS an assumption, given the field is also shocked), then the length of an unstable collision-region cylinder would be 1.5 pc. This is about 10x smaller than the collision region seems to be from eyeballing the cdm.
For the case of subcritical cylinders (i.e. smaller box simulations), how long does it take to acquire enough mass for the cylinder to become supercritical?
Can rearrange the above criticality equation to instead be in terms of the column number density, N:

$$N_c = \frac{B_0}{2\pi\sqrt{G}\,\mu m_H}$$

Setting this equal to a function for N(t),

$$N(t) = n_0 v_0 t$$

and solving for t gives:

$$t_c = \frac{B_0}{2\pi\sqrt{G}\,\mu m_H\,n_0 v_0}$$

where $n_0$ is the inflow number density and $v_0$ is the inflow speed. Solving for the time gives $t_c \approx 2.6$ Myr.
This says the field is dynamically weak and SHOULD not be able to prevent global collapse by the time t = 2.6 Myr at the mass influx of the simulation. However, our simulation time is 30 Myr. So, something is supporting the gas against collapse.
Next, compare turbulent Jeans timescales? If that is much longer than this timescale (and the simulation time), might be able to argue that the reason we're not seeing collapse is because of turbulent support?
meeting update
I've been working on my paper and am thinking of goals for my upcoming trip to MPI.
Intro 1st draft
The interstellar medium (ISM) is a dynamic and ever changing environment. It exists in various thermal states, appearing to cycle through them continuously - from a hot, sparse phase to a cold, dense phase and back again. The evolution of the gas through these phases paints a picture of star formation, one in which the molecular clouds from which stars form are not long lived steady state structures as once believed, but rather the transient consequence of various instabilities that arise in the ISM (cf. Heitsch et al 2005 for a quick discussion of the various instabilities).
The picture of transient molecular clouds (born out of the cold, dense phase of the ISM) is the foundation of today’s theory of star formation, and is supported by many well known lines of evidence, such as: the vast majority of (if not all) molecular clouds are in the process of forming stars, these stars are young (less than 5 Myr), and molecular clouds contain a wealth of hierarchical structure that would not last long due to star-star scattering or tidal interactions (Elmegreen 2000, Hartmann 2001, Ballesteros-Paredes and Hartmann 2007, and references therein). Indeed, this idea of short cloud lifetimes and its connection to fast star formation has its roots in work dating back decades (Hunter 1979, Larson 1981, Hunter et al. 1986). As computational methods have become increasingly powerful, these theories have been tested in simulations designed to follow the highly dynamical evolution of molecular cloud formation.
One such model that has gained widespread recognition is the ‘colliding flows’ model. This model provides a mechanism for generating clouds with the above mentioned characteristics. Importantly, the collision between two supersonic streams of gas has been shown to produce non-linear density structures with ease; their density can grow large enough, for long enough, to allow for H2 formation, after which the molecular gas undergoes localized gravitational collapse (i.e. numerical star/cluster formation) and disperses within roughly a dynamical time (Heitsch et al. 2006, and others).
While these models are highly idealized, they are not without motivation. Coherent large-scale streams of gas are plausible in many situations in the ISM: expanding bubbles driven outward from energetic OB associations and/or supernovae, turbulent motions, density waves in the spiral arms of galaxies, and cloud-cloud collisions have been proposed (Hartmann, Ballesteros, Bergin (2001), PPVI chapter 1, Inutsuka 2015). Further, observational evidence is beginning to provide support for such mechanisms: Looney 2006 has demonstrated cloud-cloud collisions, atomic inflows surrounding molecular gas have been observed in Taurus (Ballesteros-Paredes, Hartmann, & Vazquez-Semadeni 1999) and other molecular clouds (Brunt 2003), and molecular clouds have been found at the edges of supershells (Dawson 2011, 2013).
With the large body of data supporting large scale magnetic fields in the ISM (cite), the addition of magnetic fields to these models was necessary (cite sims). Such models have identified additional instabilities formed in the flows (cite), and an overall reduction of SF is found (cite). As a natural next step, models are beginning to study how oblique shocks at the collision interface affect cloud and star formation (cite).
As the flows themselves are not likely to be exactly parallel, the question of oblique shocks is an important one to address. Such oblique flows can arise in X. As flow enters an oblique shock, a shear velocity field is formed, giving rise to the question: how does added shear affect cloud and star formation? Recently, Ostriker et al. …. In the present paper we wish to explore this parameter regime, i.e. that of magnetized, oblique colliding flows.
meeting update
I got an estimate for the ring last week, but it seems to be off by a couple orders of magnitude. Will share at the meeting tomorrow.
Working on text trimming and intro writing as well.
Meeting update
Here is an outline of the ideas I am considering for the ring analysis. The main idea is that the shocked material of the flows acquires some new density and velocity, which translates into a ram pressure pushing outward into the ambient medium. We can assume that where this material meets ambient gas of equal pressure, it comes to rest, and that is where the ring will form. An issue is that we need a radial dependence of the pressure in the ambient medium (or from the splashing), so that we can find a radius where these two pressures balance.
Using just the shock conditions means we consider the splashed material as having constant rho*v^2. So what physical mechanisms would provide a radial dependence of the pressure?
Here were some of my ideas (by the way, ignoring MHD for now):
-The splashed material slows down as it is ejected from the collision region due to gravity — perhaps refer to Parker wind solutions (or something similar to Joe's analysis…; Parker wind might not be the right term for what I am thinking, but haven't looked into this yet…)
-The ambient medium is infalling toward the gravitational potential minimum, i.e. the center of the box. This infall has some radial behavior, and if I can back out the radial density and velocity profiles, I will have a ram pressure to balance against the shock ram pressure
-Then more complicated effects, like compression of the outgoing splashed material as it rams into the ambient gas, leading to cooling, loss of kinetic energy, and a concomitant decrease in velocity.
I focused on the 2nd case as it was the one I was most familiar with, although it might not be entirely the right regime for this analysis, depending on whether the ambient medium is fully Jeans unstable. It seemed a nice place to start, to jump into the equations and the process of building the model; a rough sketch of the balance is below.
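Here is a minimal sketch of case 2, assuming the ambient gas free-falls from rest onto a central mass (the material already in the collision region) plus its own uniform density. All names and numbers here (rho_a, M_c, rho_s, v_s) are illustrative placeholders, not values from the sims:

import numpy as np
from scipy.optimize import brentq

G  = 6.674e-8   # cgs
pc = 3.086e18   # cm

# --- placeholder inputs, NOT taken from the simulations ---
rho_a = 1.0e-22   # ambient density [g/cm^3]
M_c   = 2.0e36    # mass already in the collision region [g]
rho_s = 4.0e-22   # post-shock (splashed) density [g/cm^3]
v_s   = 1.0e6     # speed of the splashed material [cm/s]

def ram_infall(r):
    # Ram pressure of ambient gas free-falling from rest at large radius
    M_enc = M_c + (4.0 / 3.0) * np.pi * r**3 * rho_a
    return rho_a * 2.0 * G * M_enc / r          # rho * v_ff^2

def balance(r):
    # Zero where the infall ram pressure equals the (constant) splash ram pressure
    return ram_infall(r) - rho_s * v_s**2

r_ring = brentq(balance, 1.0e-3 * pc, 1.0 * pc)
print("ring radius ~ %.3f pc" % (r_ring / pc))

The bracket handed to brentq just has to straddle a sign change; with these placeholder numbers it picks out the radius where the infall ram pressure has dropped to the splash ram pressure.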
Now I am nearly done with the hydro, normal-shock situation, assuming no reflection. As you can see on this page, there are ways I can modify this model for the head-on case:
- making the shock reflected (expect small deviations from the values)
- making the collapse not spherical but rather cylindrical (also don't expect much deviation)
- adding magnetic pressure to the mix… probably here I would get the greatest change in the values, i.e. my model might fail the most because it doesn't include magnetic pressure at this point.
Here is a page describing the shock conditions I am using:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CFringanalysis
Here is a page describing the uniform collapse model I have been developing:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/UniformCollapse
Uniform collapse involved:
- understanding the solutions for uniform collapse again
- couldn't find this easily on the web, so dug into the solutions myself
- coded it up in Mathematica to visualize the solutions
- got the model nearly where it needs to be to begin using numbers from my sims to check it out
As there was no good description of this that I could find, either in textbooks or on the web, I thought it would be useful to write up what is in my notebook on the wiki; the key result is reproduced below. Need to try to make this visible in the Google universe, as a search for 'uniform collapse sphere' or 'homologous collapse sphere' isn't bringing it up on Google yet. Will check quickly whether the visibility can be increased.
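For reference, the key result (a standard one, assuming a pressure-free uniform sphere; M is the mass enclosed within the initial radius r_0) can be written parametrically as a cycloid:

r(\eta) = \frac{r_0}{2}\,(1 + \cos\eta), \qquad t(\eta) = \sqrt{\frac{r_0^3}{8GM}}\,(\eta + \sin\eta)

Collapse completes at \eta = \pi, giving the free-fall time

t_{\rm ff} = \frac{\pi}{2}\sqrt{\frac{r_0^3}{2GM}} = \sqrt{\frac{3\pi}{32\,G\rho_0}}

which is independent of r_0 for uniform initial density rho_0, so every shell reaches the center simultaneously; this is what makes the collapse homologous.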
As far as papers, I have been reading astro-ph every day, reading the highlights in astro on popular science sites, and reading the star formation newsletter.
I applied to AstroHack Week but didn't get into the program. I have also been working through the equations and their importance in the Federrath paper on outflows.
update ticket 150
Monopole expansion works in 2.5D
Here is a comparison of phi for the same 2.5D and 3D simulations of a clump at the origin:
The boundary conditions in the 3D case are reflecting on the x, y, & z axes, and in the 2.5D case are reflecting-cylindrical on the r and z axes. For the other box sides, the only possible BC is multipole, which now has monopole, dipole, and quadrupole terms available in 2.5D (but only monopole in 2D). Both runs use the same solver.
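For reference, here is a minimal sketch of how the leading (monopole) term of such a boundary condition can be evaluated; this is an illustrative stand-in, not AstroBEAR's actual routine:

import numpy as np

G = 1.0  # computational units

def monopole_boundary_phi(rho, dx):
    # phi ~ -G*M/|r - r_cm| evaluated on the outermost cell layer of a
    # uniform 3D grid (hypothetical sketch of the monopole boundary term).
    nx, ny, nz = rho.shape
    dV = dx**3
    M = rho.sum() * dV                                  # total mass on grid
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dx
    z = (np.arange(nz) + 0.5) * dx
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
    # Expanding about the center of mass kills the dipole term
    cx, cy, cz = [(rho * C).sum() * dV / M for C in (X, Y, Z)]
    r = np.sqrt((X - cx)**2 + (Y - cy)**2 + (Z - cz)**2)
    r = np.maximum(r, 0.5 * dx)                         # soften at the COM cell
    phi = -G * M / r
    mask = np.zeros(rho.shape, dtype=bool)              # outermost cell layer
    mask[[0, -1], :, :] = True
    mask[:, [0, -1], :] = True
    mask[:, :, [0, -1]] = True
    return np.where(mask, phi, 0.0)

Since the expansion is taken about the center of mass, the dipole contribution vanishes, which is part of why the monopole alone already does well for a single clump.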
Here is a line out of phi:
You can see they are very close. The spreadsheet curve shows their differences: largest inside the clump, closer near the boundary, and approaching zero just outside the clump.
For reference, here is a comparison of the 2D simulation with the 2.5D:
Here, the 2D simulation setup is exactly the same as the 2.5D case, except the boundary condition is plain reflecting, and the solver is structPCG instead of structGMRes (which gave a segfault). We can see that 2D self-gravity did a worse job of reproducing the full 3D spherical behavior, as expected. Most notable is the puffier appearance of phi. This is because of the lack of 1/r terms in the stencil in xy space compared to rz space; in rz space, phi is more sharply peaked.
Next steps
This simulation (clump at origin) only produces a monopole term in the multipole expansion, so we would also like to check the behavior of the higher-order terms using different mass distributions.
meeting update
Here is my literature list, Adam (still have to fill in the CF section; I have read a lot of papers there and need to grab them):
https://astrobear.pas.rochester.edu/trac/wiki/u/EricasLibrary
I will have some results up to share on my 2.5D gravity ticket for tomorrow's meeting (last week was working on that)
Here is a new git page I made: https://astrobear.pas.rochester.edu/trac/wiki/u/erica/GitRepos
Conferences:
IAUS 315: FROM INTERSTELLAR CLOUDS TO STAR-FORMING GALAXIES: UNIVERSAL PROCESSES? August 2015, Honolulu
Conditions and Impact of Star Formation Sept 2015, Switzerland
Dynamics of the Interstellar Medium and Star Formation Sept 2015, Heidelberg
Feedback in the Magellanic Clouds Oct., Maryland
Astrobiology and Planetary Atmospheres Sept 2015, Chile
Astro hack week Oct 2015, NYC
Star Formation 2016 August, 2016, Exeter
update
Doing reading for next project on outflows.
Did CIRC poster session.
Met with Alyssa Goodman.
Worked on my ticket for 2.5 self gravity
draft
attached
draft
draft
meeting update
3D Isotropic Turbulence
Here is a test of the problem module 'Isotropic Turbulence' in 3D. The 2D version of this code ran fine (see my previous blog post here: erica02232015).
In 3D there might be a bug: instead of isotropic sources, I am seeing columns of turbulent sources. A slice through the box in the xy plane resembles the 2D simulation (see the sketch below the figures for a possible culprit).
Contour of rho, momentum, and pseudocolor of rho:
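For debugging reference, here is a minimal numpy sketch (not the module's actual code) of how an isotropic 3D spectral perturbation can be generated. The comments mark where a 2D-only wavenumber dependence would reproduce the column symptom:

import numpy as np

def isotropic_component(n, kmin=2, kmax=8, slope=-4.0, seed=0):
    # One component of an isotropic random field, built in k-space with
    # random phases and power only in the shell kmin <= |k| <= kmax.
    rng = np.random.default_rng(seed)
    k1d = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)        # full 3D |k|; if this used
    shell = (kmag >= kmin) & (kmag <= kmax)      # only kx, ky, every xy slice
    amp = np.zeros_like(kmag)                    # would look like the 2D run,
    amp[shell] = kmag[shell] ** (slope / 2.0)    # i.e. 'columns' of sources
    phases = rng.uniform(0.0, 2.0 * np.pi, kmag.shape)
    vhat = amp * np.exp(1j * phases)
    v = np.fft.ifftn(vhat).real                  # real part is enough here
    return v / v.std()                           # normalize to unit rms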
Some physical quantities for paper
Magnetic field strength
To get the magnetic field strength of the flows for the paper in gauss, I think I can do this:
Take the computational magnetic pressure and multiply by pscale to get the magnetic pressure in cgs,
B_cgs^2/(8 pi) = (B_cu^2/2) * pscale
Then multiply by 8 pi and take the square root to get B (cgs).
Now, I was unsure whether gauss is the unit of 'B': Wikipedia says gauss is the unit of magnetic flux density. But magnetic flux density is just another name for B itself, not an energy density (the energy density is B^2/8pi, in erg/cm^3), so taking the square root is correct.
Considering just B, I get,
B = 1.3 microgauss
This seems reasonable, considering typical estimates for molecular clouds and ISM are ~ 10 microgauss..
Ram pressure
In my physics.data, xmu = 1.27, which is the mean molecular weight in the code. The mean molecular weight is the mean mass of a gas particle in atomic mass units (amu). One amu is technically 1/12 the mass of carbon-12, and so is approximately the mass of one nucleon, or equivalently, one proton (m_H).
rho = n * xmu * amu
Xmu seems to be used to set the scale for velscale (which is a scaled sound speed). With xmu = 1.27, the gas is effectively *mostly* hydrogen with a small admixture of helium. Note helium is 4x (not 2x) the mass of hydrogen: a ~10% helium abundance by number gives mu = (1 + 4*0.1)/1.1 ~ 1.27 for neutral gas.
Pram = n*xmu*amu*v^2
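A minimal sketch of both conversions, with placeholder numbers (pscale, B_cu, n, and v here are illustrative, not the values from my physics.data):

import numpy as np

pscale = 1.0e-12      # pressure scale [dyne cm^-2 per computational unit] (placeholder)
amu    = 1.6605e-24   # atomic mass unit [g]
xmu    = 1.27         # mean molecular weight

# Field strength: B_cgs^2/(8 pi) = (B_cu^2/2) * pscale
B_cu  = 1.0                              # computational B (placeholder)
P_mag = 0.5 * B_cu**2 * pscale           # magnetic pressure in cgs
B_cgs = np.sqrt(8.0 * np.pi * P_mag)     # field strength in gauss
print("B = %.2e gauss" % B_cgs)

# Ram pressure: P_ram = rho * v^2 = n * xmu * amu * v^2
n = 1.0        # number density [cm^-3]  (placeholder)
v = 1.0e6      # flow speed [cm/s]       (placeholder)
print("P_ram = %.2e dyne cm^-2" % (n * xmu * amu * v**2))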
development report
Am working on compiling my dev branch for 2.5D self-gravity and thinking of a test problem (the former seems to be going smoothly; the latter I will discuss in the meeting today). Maybe a few days out from finishing this ticket. Next on my plate might be 1D gravity.
I also suggest making sure the code can compile on all local machines in the next fix of the code.
Meeting update
Science Meeting
Projected streams
Here is another image showing projected streamlines. This time, we are looking at the final frame of the 0, 30, and 60 cases from left to right. We can see from this how straight the interface became in the 60 case — it looks like the 0 case on the far left. (Thought: does the interface evolve faster in the 60 case than in the 30 case? Perhaps with enough time, the 30 case would also straighten out.)
Projected streams vs sliced streams
Last time we saw this image of the projected streams,
This time, here is the same image but made from slices rather than projections (that is, slices in density, with streamlines down the middle of the box).
Coding meeting
Simulation Proposal
meeting update
Working on the paper, and see last week's meeting post:
Meeting update
Update e v. t plot
Last time, we thought this plot of total energy in the collision region over time looked a bit funny, with the leveling off of the gravitational energy (yellow) in the collision region,
since the mass in the collision region was thought to be a strictly increasing function. Recall, this was made by taking a section of the grid, a cylinder of radius 20 pc and length 20 pc, and then doing a weighted variable sum of the various quantities; see this page for reference.
Gravitational energy was calculated with the expression,
<gasphi*rho>,
which only accounts for the *gas* gravitational energy. This doesn't matter before 10 Myr, since no sinks have formed by then. But after 10 Myr, it was curious how that curve just leveled off.
Here is a recent plot we made showing the particle mass compared to 1) the whole simulation box, and 2) the same collision region we used for the E v. t plot above. We see that the particles add up to about 10% of the total mass of the collision region. This implies they aren't negligible and should be added to the total gravitational energy expression (a sketch of the extended sum is below). Further, it is interesting that the total mass of the collision region seems to level off (note this IS a log plot, making differences appear smaller than they are). This implies the mass flux in = mass flux out in this region.
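A minimal sketch of what the extended sum might look like; all names here are illustrative, and whether a factor of 1/2 belongs in front depends on the convention already used in the <gasphi*rho> sum:

import numpy as np

def grav_energy_with_sinks(phi, rho, dV, in_region, sink_mass, sink_phi):
    # phi, rho: grid arrays; in_region: boolean mask for the 20 pc cylinder;
    # sink_mass, sink_phi: per-particle masses and the potential interpolated
    # to each particle position.
    E_gas  = (phi * rho)[in_region].sum() * dV   # existing gas-only term
    E_sink = np.sum(sink_mass * sink_phi)        # new particle term
    return E_gas + E_sink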
Talk from CIRC symposium
If you go to slide 10, you will see side-on views of the 30 and 60 cases. It was interesting to see them again like this; it seems the 60 case straightens out a lot more than the 30 case
Curious what the field is doing here, I wanted to see the streamlines at the final time for the different runs. Here is a projection of the streamlines through the domain at 27 Myr,
and just another pretty shot of the mesh and the flow,
Isotropic Turbulent Sim
Got a version of this module up and running for 2D to generate filaments for Amira to analyze,
Looking at the v^2 spectra, we see the characteristic turbulent decay,
Amira output -
x,y positions of segments along filaments:
Nodes, segments, points:
Comparing the filaments Amira finds above a certain threshold to a pseudocolor plot of density,
meeting update
Not much to report; working on the paper (will have a draft of the sections I am working on up on the wiki in a couple of days), and going to put together a short talk for the CIRC symposium Friday
meeting update
Working on the results section of the paper to get out to folks.
Worked on some new streamline plots last week in collaboratory on the figures page.
Working on some mass/sink plots with Marissa
Meeting update
- Have been going through my runs and organizing the information I have on them into pages
- Have been writing a 2.5D self-gravity page on the fixes for ticket 150: http://astrobear.pas.rochester.edu/trac/wiki/u/erica/CylindricalGravity
- Working on plots for Friday's meeting with Marissa,
Meeting Update
Last week I worked on fixing the code for the colliding flow figures, self gravity ticket, and documentation.
The newest papers figure page is here:
https://astrobear.pas.rochester.edu/trac/wiki/u/madams/CollidingFlowsFigures
Meeting update
- Handed new code off to Marissa for the column beta maps. This takes projections of the field 'inverse beta', rather than projections of pressure and magnetic energy individually: https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CF_BetaMaps
- Recalling some fourier stuff from last meeting before fixing up that code, https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CF_spectra
Meeting - 12/15/14
- Spectra Code - made spectra code for each of the different runs for Marissa, located here: /scratch/ekamins2/MHD_Shear/Code. I'm making pages for each of the plots for future reference, located on my page. Here is a page on the spectra code: https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CF_spectra
- Presentation - presented some plots on magnetic fields in cluster forming simulations at journal club last week. Think it helps to answer the question of the role of B in star formation. Am attaching this presentation to this blog post.
- Spectra Tests - tested some differences in the spectra code, see blog post: https://astrobear.pas.rochester.edu/trac/blog/erica12122014
- Fabian's Code - learned some basic IDL and took a look into Fabian's code for analyzing filaments.
- Contours - we looked at the contour plot last week, seems like something easily programmable into astrobear: https://astrobear.pas.rochester.edu/trac/blog/erica12092014
Testing new spectra params
Attaching 2 spectra here to see differences between:
-Cube spectral region, constant interpolation
-Rectangle spectral region, constant interpolation
-Cube spectral region, spectral interpolation
Gravitational Energy Spectra:
Kinetic Energy Spectra:
As can be seen from the plots, there is no difference between choosing a cubic or rectangular region of the box over which to perform the spectral decomposition (the curves lie on top of each other). There is a difference between the interpolations, though: spectral prolongation seems to somewhat smooth out the curves. I will go ahead and make code for rectangular boxes and spectral prolongation for next week's plots meeting. A sketch of the shell-binned spectra computation is below.
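This is a minimal stand-in for the spectra pipeline (not the actual code): FFT a 3D scalar field and sum |f_k|^2 over spherical shells in |k|. It assumes a cubic region; a rectangular region would first be prolonged/interpolated onto a cube, as discussed above:

import numpy as np

def shell_spectrum(f):
    n = f.shape[0]
    fk = np.fft.fftn(f) / f.size                 # normalized Fourier amplitudes
    power = (np.abs(fk) ** 2).ravel()
    k1d = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2).ravel()
    edges = np.arange(n // 2 + 1)                # integer |k| shells
    spec, _ = np.histogram(kmag, bins=edges, weights=power)
    return 0.5 * (edges[:-1] + edges[1:]), spec

# e.g. a kinetic energy spectrum from one component: f = sqrt(rho) * vx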
Meeting Update
Fourier Spectra
Link to last week's post on this here
Paper figures progress
Here is where we are with paper figures:
I. Dynamics & Morphology
Col density | X |
Energy Spectra - B, KE, gravitational | X |
B v. n PDFs | X |
Beta maps | X |
II. Identifying Filaments & Characterizing their Dynamics
3D plot of filaments (at end of sim, across different runs), with and without streamlines | |
Pick out 3 or 4 best filaments then plot along filament angle between mean field and filament direction | |
Radial characteristics | |
Morphology identification, hub n spokes? Filaments and fibers? | |
Flow along vs. onto filaments | |
Projected streamlines plot | X |
III. Put in terms of Observer
Velocity spectra and/or velocity dispersion | X |
Density spectra | X |
Polarization maps | |
Characterizing projection effects? |
IV. Amira Code
Identify filaments by hand and their characteristics | |
Code that into Amira | X |
Make easy filament set and test with Amira | |
Then harder test case | |
Use to count different filaments in time |
Spectra for No-Shear Production Run
Both spectra are taken within a 40 pc cube at the center of the simulation domain, coinciding with the collision region. This is for the No Shear case, taken at t=0 (above) and t=10 Myr (below). Fields are given in the legend, from top to bottom: Total Energy (Field 105), Gravitational Energy, Kinetic Energy, Magnetic Energy, rho^2, v^2, v_div^2.
Initial conditions:
Nothing too interesting here. We see all energies are largest on the largest size scales (smallest wavenumbers on the x-axis) and drop off to zero on the smallest size scales. I guess the smallest size scale corresponds to the smallest cell size. There are no features on the largest scales, meaning the power on those scales is pretty uniform over that range. As we move to smaller scales, the power falls off. This makes sense: smaller volume elements contain less energy than the bulk. However, at a certain scale we begin to see wiggles, indicating an enhancement of energy on those scales. I guess this has to do with the rippled interface of the collision region.
The different forms of energy all look the same at t = 0; they are all composed of doublet peaks. This makes sense, as they should be functionally similar at t = 0. One thing that differs is the height of the peaks in the various spectra. The magnetic, gravitational, and density curves seem to share the same peak heights, but on different absolute scales. The kinetic and total energy curves also share the same peak heights, but they lie on top of each other, and the same goes for v^2 and v_div^2.
t=10 Myr
Here is a page on this plot with the details on how I generated them:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CF_Spectra
Meeting update
Have a spectra code for mass, velocity, and energy spectra. Am now working on limiting the region of the grid where the spectra are computed. First am doing this for the no-shear case, as this should be the most straightforward.
Meeting update
B v. n plot-
The B v. n code is done; it might just need to be tweaked for the best min/max combination for the magnetic pressure.
Here is a page on this plot:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CF_bvn_plot
Marissa, you can find the problem module here:
/grassdata/erica/CollidingFlows/scrambler_3.0/modules/Problem/problem_Bvn_pdf.f90
You will need to copy it to BS, compile it, and run it with a reservation. I set up a reservation for you to start next week, Wednesday. There are *2* PDFs of B v. n: one weighted by volume and one by mass (a sketch of the histogram code is below). We can see which one we like best. Each job shouldn't need more than 256 cores, I would say. Start by making quick-and-dirty plots of these for each of the runs, including the convergence tests, at frames 100 & 200 please (for the convergence tests, at frame 50). With the reservation you should be able to run them all in parallel easily.
Also Marissa, start early in the week on the script; I will be back on Wednesday to look at the B v. n plots with you.
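Conceptually, the two PDFs are just 2D histograms of (n, B) with different weights. A minimal sketch, where the inputs are flattened cell arrays and the bin edges are placeholders (these are the min/max combinations still to be tuned):

import numpy as np

def bvn_pdfs(n, B, dV, rho):
    n_edges = np.logspace(-2, 4, 129)    # number density bins  (placeholder range)
    B_edges = np.logspace(-8, -3, 129)   # field strength bins  (placeholder range)
    bins = (n_edges, B_edges)
    H_vol,  _, _ = np.histogram2d(n, B, bins=bins, weights=dV)        # volume-weighted
    H_mass, _, _ = np.histogram2d(n, B, bins=bins, weights=rho * dV)  # mass-weighted
    return H_vol / H_vol.sum(), H_mass / H_mass.sum()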
B10_60 case -
Ran out to about 35 Myr and got a single sink to form. You can find an updated down-the-barrel movie for this case on this page, https://astrobear.pas.rochester.edu/trac/wiki/u/erica/B10_S60#no1, under attachments, named 'test.mpeg'.
Self-gravity ticket -
Made some progress on my ticket — will post to it when I return
My outline of tasks for paper figures
Week of Nov 3-7th
-Get working code for B v. n maps for Marissa
-Get projection of field lines code
-Get beta map code for Marissa
Week of Nov 10-14th
-Get working code for energy spectra for Marissa
-Get working code for velocity and density spectra
-Get code for velocity dispersion
-Email Amira folks for trial license
Week of Nov 17-21
-Get Fabian’s code working and code up different radial analysis we want
-Work on characterizing filaments mathematically for use with Amira
Week of Nov 24-28
-Work with Amira
Meeting update
Runs are ¾ done.
Just moving data to local machines. Should have more movies up soon.
Here is the B10-60 case:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/B10_S60
I am thinking we will want to do the bulk analysis on just the Beta = 10 runs, that is, shear angles = 0, 15, 30, & 60.
For the B1 run, maybe we can just have a small section in the paper showing morphology (column density maps)? This run is only out to t=20 Myr as of now. I don't think it is necessary to run it out longer either.
For the resolution study, I was also just going to show a column density map?
For the B v. n maps, I am thinking of plotting PDFs of the magnetic energy vs. number density? What units should these be in?
Lastly, thoughts on restarting the B10 60 case for another 10-20 Myr?
Figures Paper - updated
Dynamics & Morphology
Col density | X |
Energy Spectra - B, KE, gravitational | X |
B v. n PDFs | X |
Beta maps | X |
Identifying Filaments & Characterizing their Dynamics
3D plot of filaments (at end of sim, across different runs), with and without streamlines | |
Pick out 3 or 4 best filaments then plot along filament angle between mean field and filament direction | |
Radial characteristics | |
Morphology identification, hub n spokes? Filaments and fibers? | |
Flow along vs. onto filaments |
Put in terms of Observer
Velocity spectra and/or velocity dispersion | X |
Density spectra | X |
Polarization maps | |
Characterizing projection effects? |
Amira Code
Identify filaments by hand and their characteristics | |
Code that into Amira | |
Make easy filament set and test with Amira | |
Then harder test case | |
Use to count different filaments in time |
Figures for Christina's Runs
Christina's:
Plot | Christina | Erica |
Total "Cloud" Mass (cloud = > 100 cm 3) over time | X | |
Total sink mass/(cloud mass + sink mass) over time | Will work on | |
Mass in cold gas (T < 300 K) | X | |
Density weighted histogram of ratio L-R | X | |
Histogram of (L-R)/(L+R) in cloud over time (color plot) | I'll work on | |
Histogram of (L-R)/(L+R) in sinks over time (scatter plot with color on top) | Work on | |
Standard deviation of histogram at end time, spread across shear | ||
% prograde vs retrograde sink particles | X | |
or % mass in sinks that is prograde vs retrograde | X | |
specific angular momentum density (scatter plot histogram to show spread) | X | |
histogram of specific angular momentum density for whole cloud - does it correlate with shear? | X | |
Bulk magnetic, gravitational, thermal, kinetic over time (in box or clouds?) | X | |
Energy spectra | I'll work on | |
Energy in sinks over time | I'll work on | |
n-T histograms at final time | X | |
velocity dispersion | I can work on | |
column density maps at times when the gas column exceeds 10^21, 10^22, 10^23 cm^-2 | X | |
Meeting Update
Will be working on data analysis tools this week and job restarts- need to consider our resources
CDM and field plot for poster
Here is what the projection of beta looks like (this is made by taking the projection of thermal pressure over the magnetic pressure, ½B^2),
Here is a projection of inverse beta,
I like working with beta better, so in the plots below that is what I am using.
The idea is - next to the CDM's on the poster, we will have a plot of the field. Here are some possibilities:
2 pseudos overlaid on top of each other, one that is N2 and one that is beta. The CDM of N2 is in grays underneath the hot color plot of beta:
Beta with density contours on top:
I think I like the contour plot better — the overlaid pseudos are potentially misleading.
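For reference, a minimal sketch of the beta maps above: plasma beta = P / (B^2/2) in computational units, averaged through the box along one axis, plus the inverse-beta version. Projecting the ratio cell by cell (rather than dividing two separate projections) matches the approach described in the beta-maps post:

import numpy as np

def beta_maps(P, Bx, By, Bz, axis=0):
    P_mag = 0.5 * (Bx**2 + By**2 + Bz**2) + 1.0e-30   # tiny floor avoids 0/0
    beta = P / P_mag
    return beta.mean(axis=axis), (1.0 / beta).mean(axis=axis)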
Plotting for MHD CF paper
Here are some comparisons of Jonathan's paper figures and the plots that I have:
Density vs Pressure PDFs
Questions,
How to choose the bounds?
Adding those lines in visit?
Density vs. velocity
The plot Jonathan used in the paper was velocity dispersion over time, density-weighted. The plot I have now is a volume-weighted density vs. velocity PDF. It doesn't show the information as cleanly as the line curve does.
Weighted mixing ratios
The plot Jonathan used in the paper was the mass-weighted mixing ratio; what I have is the volume-weighted mixing ratio.
So here is a collection of plots:
Column Density | have |
Density-weighted joint probability distribution function for density vs. pressure | have |
Kinetic and gravitational energy spectra | — |
Mean kinetic and gravitational energy densities Ē_k,g against time, split between large (> D, see eq. 2) and small scales | — |
Mass history against time | have |
Density-weighted velocity dispersion against time | — |
Core mass distribution | have |
Mixing bias (eq. 8) vs distance of cores to mid-plane, at time t | have |
Mass-weighted mixing ratio (eq. 7) for the gas in the analysis region | — |
Mixing ratio of cores at time t | have |
Update
Will work on Fourier stuff this week.
Working on submitting abstract for poster, here is a rough outline:
Large-scale colliding flows are an interesting model for molecular cloud formation. The model produces clouds that are turbulent and filamentary in nature and that produce stars across the whole cloud interface (anything else? lifetimes that match??). Previously, we looked at the hydro case, studying energy budgets, sinks, etc. using the AstroBEAR AMR fluid code. Now we have extended this further, adding the effects of magnetic fields and shear. In addition to performing the analyses as before, we plan to extend them by studying the filaments formed in our model. How large are they? How many stars do they form, and what is their mass distribution? What is the gravitational stability of the filaments? (Take for example points brought up in the observational papers that studied Sco Cen.) Given the large-scale nature of our simulations (quote the resolution), we form dozens of filaments, and are excited to see the distribution of filament properties over the different runs. (Cite the runs?)
Meeting update
Development
Working on 2.5d and 1d spherical gravity mods to the code.
Here is a page on this I am putting together as I go:
https://astrobear.pas.rochester.edu/trac/wiki/u/erica/CylindricalGravity
Using Leveque's Finite Difference book (http://volunteerlecturerprogram.com/wp/wp-content/uploads/2013/01/lecnotes.pdf) to refresh my memory on the matrix form of the equations
Christina's runs
Have access to her school's machines now and ostensibly the hydro data set
One of the MHD runs is almost complete now
Beta1, 60 - 166/200
Beta10, 15 - 172/200
Beta10, 30 - 179/200
Beta10, 60 - 188/200
Meeting update
Last week,
- Horizons program with kids — we went over our cosmic address and the cosmic calendar (with Neil Tyson's Cosmos), and had a lab demo with Craig.
- Working on my ticket for restarts
- Ran the sims out for 4 days on Bluestreak reservation, Marissa is set to start them up again on a 7 day reservation there starting Thursday.
- Read Hennebelle's 'CF successes and limits' - attached to this blog post
- Working with UNC to get access to their machines
Meeting update
Machines are almost full.
Here is an updated page of CF run stats:
http://astrobear.pas.rochester.edu/trac/wiki/u/erica/CFRunStatsHighRes
Meeting update
High-Res MHD Shear
- Runs are coming along - Beta10, Shear15 (95/200) Shear30 (92/200) Shear60 (100/200) and Beta1, Shear60 (75/200)
- Keeping track of the restarts on this CF page (updated),-http://astrobear.pas.rochester.edu/trac/wiki/u/erica/CFRunStatsHighRes
- Going to need a solution for storing this data. I am estimating needing an extra TB locally, which will come right up to the edge of the available storage on our local machines. So maybe folks need to delete old stuff, or we should decide how many of the frames to keep (only 1 in 10 frames or so?)
- Only saw NaNs in one of the simulations (B10, S60), but they were resolved; only saw restarts in 2 of the 4 sims, and only 1-2 times each. The sims seem to be progressing fine
- Restarting with a smaller number of cores resolved the memory issue, so I guess on 4096 cores I hit a limit on memory per core. The sim runs fine on 2048 cores
- Am post-processing data now. Getting the different pdfs, histograms, and column density maps Jonathan got for his CF paper, and will have Marissa work on fly-throughs
- Going to do a resolution-convergence test for one of the runs, run out to frame 50 with 1 level more and 1 level less for the paper
High-res Hydro runs
- Asked Christina for location of hydro sims. She said she would work on getting me an account on her local machines. Will ping her again this week.
- She is teaching for the summer. I am wondering if I should offer to do the analysis for the hydro runs as well? Try to get 2 papers out on the CF, MHD and Hydro? Just thinking it would be nice to have a hydro version to compare to for the MHD effects..
- When should I plan a visit to NC?
Allocations
I called NICS the other day as I was having trouble logging into the Kraken replacement machine. They said that our allocation had expired just that day, leaving >½ a million unused CPU hours. He said that starting July 1st we have allocations at San Diego & TACC, and that I might need to be added to those grants.
Outreach
The horizons program for innercity youths is starting next Tuesday. This is an 8 session summer workshop series that we are participating in to teach kids about astronomy. Marissa, Jonathan, and I are going to lead workshops for the kids. We each can come up with our own lesson plans, but please let me know what you plan to do with them by Monday.
Bonnor Ebert Paper
The forms have been received and the paper is awaiting publication
Meeting update
Talked with Jonathan on tasks and threading, will need to adjust the slurm script on Bluestreak to use threading
Made tickets for documentation, assigned each group member a ticket to begin working on — what are our names on the wiki?
Changed due dates for development tickets
Read a lot of colliding flows papers last week, am putting together an email for Fabian to meet next week.
Meeting update - 5/19/14
BE paper
- Got all the proofs back to ApJ
- Sent you the form for funding the paper. Not sure if this is needed before they publish?
MHD Shear Flows
- Need to decide on the field orientation for the 'perpendicular' case. The Beta=1 effects for a uniform field in y or z are too strong (see below). Talking with Jonathan about this, it seems a highly unrealistic initial field orientation to begin with — how do you get this large-scale perpendicular field and a tilted interface in the first place? If there were some large-scale perpendicular field, then by the time the flows collided, they would have no tilted interface. A more realistic perpendicular case might be a helical field inside the colliding flows only (no ambient field)? This would simulate a perpendicular field without the broken symmetry introduced by the tilted interface. For now, though, am running 3D low-res sims of the y and z fields with a weaker field.
- The data set for these runs as they stand is extremely large. I am at ~120 GB now, ¼ of the way through the production Shear15, Beta=1, x-field run. This is at ~1500^3 effective cell resolution (48^3 base grid + 5 levels). I estimate each run will be ~1 TB in size. Where to store all this data?
- Things still to finalize for the production runs: 1) Potentially increasing the box size to prevent back-flow into the outflow boundary. This will most likely be a problem in the Shear 30 and 60 cases (see my page). Can do this either by adding code to enable a restart with a bigger box once the back-flow hits the wall, or by just running the sim with a longer box in x from the start. 2) Decide on the parameters for the flow, i.e. the Mach number, ramp-down times, etc., for production.
- Runs possible: Beta 1/10, 3 shear angles, 2 field orientations = 12 runs..
High Res, Shear 15, Bx, Beta = 1
- ¼ way through on BS — no nans, but did see source errors
- Need to restart with different threading option, seems like the messaging memory got too large (see ticket: http://astrobear.pas.rochester.edu/trac/ticket/396)
Low Res, Different Field Strengths/Orientations
Here is my wiki page on this: http://astrobear.pas.rochester.edu/trac/wiki/u/erica/LowResMHDShearFlows
And last week's blog post talked about what I was seeing in the Beta=1 case: http://astrobear.pas.rochester.edu/trac/blog/erica05122014
Hydro
Bx
Beta10, Beta1
By
Beta 10, Beta1
Bz
Beta 10, Beta 1
Column Densities
Beta 10, Beta 1
Meeting update
3D High Res Colliding Flows
I ran a Beta = 1, Shear = 15, parallel-field case on BS with 8000 cores. It got ~¼ of the way through. It ran into some bad CFL numbers (at first finite, then cfl=infinity) and src errors, requested restarts, and then made a few more frames until it died with the error:
"hyperbolic/sweep/stencil_control.cpp.f90, line 353: 1525-108 Error encountered while attempting to allocate a data object. The program will stop."
I'm attaching log files from machine to this blog post.
The global info allocations are at 300 GB at frame 57. The Info allocations didn't reach that size until frame 160 in the Hydro run on BS.
I'm curious how the src errors are mitigated.
I am also curious about this: I saw that on BH (~64 cores), the info allocations in the standard out were 10x larger than the size of the chombo. This we talked about before as being related to not counting ghost zones. When I moved to BS, the difference was a factor of 100. Is this likely ghost zones again, the larger factor this time being due to the many more, smaller patches distributed over 8000 cores?
Here are density and mesh images over time:
Everything looks good..
3D Low Res Perpendicular Field Cases
I ran both the y-field and z-field cases. This is a uniform field that runs through the ambient object and the CF object, perpendicular to the CFs. Here is a movie of the y-field case. This is density with the B vector field. Note the flow is colliding in x. Beta = 1.
We see that the oncoming flows cause the field to bend strongly, especially near the x faces of the box. The incoming flow is then directed along these field lines, squashing the material into a pancake in the xz plane. The field is pushed together by the flow and piles up strongly in the collision region, where the magnetic pressure becomes so strong that it prevents gas from collecting there. Hence the vacated cavity in the collision region, traced out by a billowing shell of material traveling in y, away from the colliding flows.
The field is so strong in this case that no interesting features of the flow come through. The NTSI, for example, is completely wiped out.
I also made column density maps of this case and the 3D low res z-field case and posted them on my page here.
I am now running these same simulations with Beta = 10 and with extrapolating/outflow-only boundary conditions on all faces of the box, since the problem with the B fields we saw before only happens when the fields are normal to an outflow boundary. The cases above had reflecting, B-normal boundaries where the field was coming from, but in trying to avoid the bounce off the boundary, I am changing those BCs now too. Directories:
/bamboodata/erica/CollidingFlows/CollidingFlows/MHD_3D/Beta10/Shear15/
They are ~¼ way through on bamboo and grass and take ~ 20 hours to complete.
Meeting update
"In 3D, the vector potential should be averaged along each edge of the component parallel to that edge. For example Ax should be averaged along each edge that is parallel to the x-axis. The 2nd order errors due to estimating the average value along an edge by the midpoint will not produce divergence in the resulting B-field." For what field geometry can one just set the B fields directly?
Where does maintainauxarrays get set? There is a bit on it in amr/amr_control.cpp.f90:
But it seems this can be set just in the objects.
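To sketch the quoted prescription: on a staggered mesh, the face-centered B is the discrete curl of the edge-centered A, and div B then vanishes to machine precision by construction. As for setting B directly, any configuration whose components do not vary along their own direction (a uniform field being the simplest case) is divergence-free cell by cell, so it can be set on the faces directly. A minimal numpy stand-in (not AstroBEAR's code):

import numpy as np

def curl_A(Ax, Ay, Az, dx):
    # Staggered shapes on an (nx, ny, nz) cell grid:
    # Ax (nx, ny+1, nz+1), Ay (nx+1, ny, nz+1), Az (nx+1, ny+1, nz)
    Bx = (np.diff(Az, axis=1) - np.diff(Ay, axis=2)) / dx   # x-faces
    By = (np.diff(Ax, axis=2) - np.diff(Az, axis=0)) / dx   # y-faces
    Bz = (np.diff(Ay, axis=0) - np.diff(Ax, axis=1)) / dx   # z-faces
    return Bx, By, Bz

def div_B(Bx, By, Bz, dx):
    # Per-cell divergence; identically zero (to round-off) for B = curl A
    return (np.diff(Bx, axis=0) + np.diff(By, axis=1) + np.diff(Bz, axis=2)) / dx

# quick check with a random vector potential
rng = np.random.default_rng(1)
n, dx = 16, 1.0
Ax = rng.standard_normal((n, n + 1, n + 1))
Ay = rng.standard_normal((n + 1, n, n + 1))
Az = rng.standard_normal((n + 1, n + 1, n))
print(np.abs(div_B(*curl_A(Ax, Ay, Az, dx), dx)).max())   # ~1e-15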
Meeting update
Would like to discuss this paper:
http://www.ita.uni-heidelberg.de/~chfeder/pubs/outflow_model/Federrath_outflow_model.pdf
Meeting upate
3D low res MHD Shear Flows page, updated with movies:
http://astrobear.pas.rochester.edu/trac/wiki/u/erica/LowResMHDShearFlows
Christina's high res runs finished
Meeting update
The shear hydro runs are continuously getting larger and slower. They are at frames 175/200, 165/200, and 158/200. I keep running out of storage space on the cluster and locally, so am trying to clear out space as I go. I will move them to Kraken to finish them up and use some of our CPU hours there.
For MHD: I have finished the Shear 15, Shear 30, and Shear 60 runs at 40 + 2 levels with beta = 1, with a uniform field in the x-direction (parallel to the flow). Will likely run a beta=10 case of these as well, and then run the perpendicular cases. These take about 1 day to run on Bluehive with 32 cores; I could of course try for more processors on Bluestreak or Kraken to get all the low-res runs done/tested before moving to high res… Maybe I will do that.
Working on finishing up BE paper edits this week.
Meeting update -- Erica
Colliding Flows:
Building the Hypre library with the no-global-partition option worked in fixing the memory-leak issue
Sims are progressing along nicely; see page for update. Standard out predicts another 8 days at the current frame rate
BE-paper:
Referee report back. Good comments. Will be working through them and resubmitting this week.
Questions:
- Keto reference?
In-group Education:
I made a page on scaling for Bluestreak
Documentation:
Mini-group report
Outreach:
Horizon's program over the summer — teach underprivileged kids about computational astro. Possible to have them using astrobear???
Mini-group page
Meeting update - Erica, 2/17/14
My main focus this week? Super-computing.
Page:
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/CFRunStats
Mesh/patches on different numbers of processors
Images of the mesh with patches overlaid for comparison between bluehive (64 procs) and bluestreak (512 procs).
If each patch only contained cells of a given AMR level, then it would be easy to see why more procs → more patches → a smaller region of highest-level refinement (as in the bluestreak case): once a physical region in the grid is flagged for refinement (say to the highest level, 5), each patch that overlays that flagged region is forced to contain level 5 cells. The more processors used, the finer the patch decomposition of that level 5 region.
The legend on these plots shows that patches can contain different levels, but it is not clear from these plots whether these subsets of the patches (the different levels) must be contiguous.
It would be nicer to see this image colored by patch number only, rather than level + patch.
Intuitively, it seems the physical regions flagged for refinement to level 5 should have a 1-1 correspondence between machines. If this isn't the case, as these images suggest, is the difference attributable to compiler differences or to the different number of processors?
Bluehive;
Bluestreak;
Meeting update - Erica, 2/10/14
Group related stuff:
I make a motion to move the group meeting to another day, one less busy with meetings. If this motion is passed, I volunteer to help with organizing another time that works for everyone. The reason I am proposing this switch is that Monday is quite stressful/hectic with all of the meetings - 11:30-12 grad students meet with the speaker, 12-1 Pizza Lunch, 1:30-3 group meeting, 3:30-5 lecture. I personally find it not optimal to prepare for our Monday meeting under these circumstances. And on another personal note, I think a Friday meeting would be best - wrapping up the week with updates, and there are no meetings/classes on that day.
Science Stuff:
Colliding Flows
Am attempting to optimize the wall time of the runs on the different local clusters. Currently, the 2 runs I have made decent progress on are the 15- and 30-degree shear angle cases. These have been restarted and moved around from bluehive to bluestreak to find a sweet spot. Here are some findings.
In 1 day on bluestreak (512 processors), the restart produced 7 new frames.
The same number of frames were produced on bluehive (64 cores, standard queue) but over 3 days.
The restart frame is 32.
Comparing frame 33 on bluehive and bluestreak:
Mesh on Bluehive:
Mesh on Bluestreak:
Rho on BH:
Rho on BS:
Computing
Worked on getting baseline estimate of astrobear -
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
Found out there are only 96 cores available per user at any given time on bluehive, only 24 hrs allowed on bluestreak and max is 48 hrs with reservation. I asked CIRC about bluestreak, and got this email, thought relevant to share with group:
Erica,
The queue time limits on BlueStreak are to ensure that everyone gets a share of the machine and a little bit to force people to check point their code. Month long run times run the risk of something happening before the job finishes. If the job is not check pointed and restartable, all the run time is lost if the job and or the node crashes. Also, if the job is in an infinite loop the whole month's worth of run time would be lost. So the time limits are short to ensure higher turn over of jobs and good programming practices.
The scheduler is configured to give all users a fair share of time on the BG/Q. So people that have only occasional jobs don't get shut out by people that continually shove lots of jobs into the queue. The share is based on the number of nodes multiplied by the run time of jobs over the past 2 weeks. The fair share and wait time in the queue are the primary factors in setting priorities in the scheduler.
Your jobs have been an exception that I am still trying to figure out. They should be starting in less than a day but they have been taking days to get started. We recently updated to a new version of the scheduler so I would be interested to see if this fixes the problems with the 256 node jobs.
All of this can be overriden with a reservation but there are two problems with that. One is that the smallest reservation we can make is half the machine. So your code would need to scale to 512 nodes or maybe run as multiple jobs in parallel to make use of all the nodes.
The bigger problem is how to justify dedicating half the machine to a single person for a month or two. This is a shared resource for the UR and URMC research communities and we try to be fair in allocating time to all the users.
I hope this explains some of our reasoning behind our policies with regard to the BlueStreak. Feel free to contact us if you still have questions.
Carl
Documentation Stuff:
When do you all want to get together and talk about documentation?
Meeting update - Erica
- Working on understanding memory usage on the big machines, and putting together a proposal for job resources to share with Jonathan (W?)
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
- Got added to Kraken account, will consider benefits of moving sims there
- Found some interesting papers last week, http://arxiv.org/abs/1401.7648, http://iopscience.iop.org/0004-637X/779/2/121/, http://arxiv.org/abs/1401.6096
Specifically for the last paper:
- High-resolution sims (~25-45 AU dx)
- Radiation transport, MHD, self-gravity, outflow feedback, self-consistent turbulent initial conditions
- Sub-grid model (Cunningham 2011) for outflow feedback: assigns some fraction of the accreted flow to be ejected from the sink along the sink's angular momentum axis, at some fraction of the Keplerian speed. This uses the mass of the sink and a radius for the sink, although I am not certain what radius they take for the sink.
Annual Research Review Meeting, 1/29/14
This blog post has been moved to a wiki page
Meeting update - Erica
Colliding Flows
Adjusted Christina's set-up (15-degree shear angle) slightly and sent it to Bluehive, where I was able to run it on 64 cores to about 30/200 frames. The amount of memory for the simulation is growing with the filling fractions, and the time it will take to complete there is > 1 month. I asked her to check the output and verify it looks good on Friday; will ping her again today. Am ultimately going to have to move this to a bigger machine to cut the completion time significantly, but am currently running into some issues compiling on bluestreak; will sit down with Baowei to work on it.
Also, as I have seen in the past, the memory reported by the info arrays in the simulation differs from the actual memory usage on the nodes. I am again seeing a difference of ~30 GB over the 8 nodes I am using. When I ran a baseline test of the code's memory usage (a very small sim with no multiphysics), the base usage was about 1.6 GB per node (or 200 MB per processor, >> the 17 MB astrobear takes on disk), which should account for ~12 GB over the nodes. Not sure about the additional memory being used here.
Paper submitted to ApJ and Astroph
Still playing with optimizing refinement
Can do geometric refinement, but when used with a refinement object, I get wonky behavior. Need to look into this a bit more.
Some documentation updates to the wiki
Updated standard out page
The job sizes page needs a bit more work still. Question: how many aux variables are there in 2.5D, and in 2.5D with angular momentum?
Also want to update the debugging page to include stack memory issues (a definition of the stack, the quiet seg fault you see with this type of limit, and also how Fortran can run out of stack fast). Is the default stack 10 MB on most clusters? And this is per node, right?
Meeting update
- I looked at some 2D shear effects with the CF's (previous post)
- Working on getting the BE paper to adhere to all of the ApJ conventions
- Thinking about the computational difficulty I might run into with this shear sim in 3D. To get a sense of the evolution of the mesh/filling fractions, I ran this out in 2D for ~1.4 Myr. Looks like the mesh is going to grow a lot along the flow axis, and we see some grids forming in the splashed regions, which Christina says are not important to resolve. I think I should come up with refinement criteria that will prevent this from happening.
Hydro Colliding Flows with shear - Tests with fixed grid
Checking staircase effect
Running the 2D shear flow on a fixed grid shows the staircasing effect is not a function of the AMR grid placement. The effect seemed most prominent in the 30-degree shear angle case from my previous post, so am using that for comparison here. Fixed grid, 512^2, is on the bottom. Movie attached.
I also checked whether the AMR speed-up factors predicted by astrobear.log for the non-shear and 45-degree shear cases were valid. These were run on bamboo, 8 processors, machine not busy.
Checking AMR speed-up factor
Non-shear case
To make the table, I took the data from astrobear.log at the time-step right before the last frame of the simulation.
Simulation | Filling fractions | Info allocations | Efficiency | Speed-up | Actual Speed-up |
---|---|---|---|---|---|
64^2 + 3 | 0.135 0.614 0.762 | 18.3 MB 3.6 MB | 54% | 3.3 | ~5 |
512^2 | n/a | 41.4 MB 5.2 MB | 82% | 5 | n/a |
45-shear case
Simulation | Filling fractions | Info allocations | Efficiency | Speed-up | Actual Speed-up |
---|---|---|---|---|---|
64^2 + 3 | 0.207 0.643 0.789 | 25 MB 5.4 MB | 44% | 1.6 | ~4 |
512^2 | n/a | 41.4 MB 5.2 MB | 82% | 5.3 | n/a |
- Between the simulations of varying shear, I notice an increase in the filling fraction on level 0. This is because of the growth of the collision region. So shear runs take longer.
- In both cases here, the AMR speed-up factor reported in the .log file is an UNDER-estimate. AMR speeds up the simulation by ~5 times, even though in the high-shear run the estimated speed-up was closer to 1.5.
- The current refinement criteria have the collision region nicely refined, but the splashing effects are not captured at high resolution. We see some features in these areas being lost by the refinement when compared to fixed grid. Is this something we would like to improve?
Resolution
Here is the fixed-grid, 512^2, 45-degree run, and the mesh for the AMR case (64^2 + 3) for comparison,
A possible concern here might be the loss of detail in the shear flows being splashed away from the collision region with the current refinement criteria (arrows in left plot). These might be regions that would otherwise form sink particles, but since the highest level of mesh isn't there, no sinks can be generated in these regions.
To get an idea of how large the flow region is, and the velocity field, here is a movie of vx:
- We see entrenched gas with high positive x-velocity in the splashing region. This gas would likely encounter similar gas from the other direction, colliding to form dense regions of possible core formation. This seems to indicate we might want to extend the resolution into these regions. Other than that, should I try to reduce refinement in the collision region and check for loss of features, in an attempt to speed up computation time?
- There is an apparent asymmetry in the high-density-forming regions, at least at early times, in the lower half of the collision region. This is curious.
- There appears to be some improper nesting occurring due to the tilted cross-section. This might be problematic should there be sharp gradients across this boundary, possibly resulting in NaNs.
Cone-shape
Looking at these simulations again today, I am curious about the cone-shaped feature that appears at the splashing region of the collision area. I am interested in why this shape is so smooth, and why it remains so as it expands over time. On first impression, it looks too smooth; I would expect some turbulent features to appear along the boundary as it is pushed into the surrounding medium. Am left thinking it must have to do with the thermodynamics/EOS, which in this case is ideal with gamma = 5/3 plus cooling.
- The clump-crushing time came to mind in thinking about this; the crushing time might be longer than the duration of this sim. Further, the density of the ambient medium is lower than that of the material in the cone; it just doesn't seem to provide enough drag on the incoming material to disrupt the flow.
Meeting update
- Revised BE paper
- Played with 2D Shear flows. Getting an error when trying to post a movie to this ticket:
Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Apache/2.2.22 (Ubuntu) Server at astrobear.pas.rochester.edu Port 80
Finding the no-shear runs to run twice as fast, producing 2x the frames in the same amount of time. The AMR speed-up is 2x bigger in the no-shear run (~2-3) than in the shear run (~1-1.5). The efficiency is 50% in no shear and 30% in shear. The filling fractions are 0.148 0.617 0.670 in no shear and 0.155 0.641 0.742 in the 15-degree shear, at the same frame. So I guess the filling fraction may be the culprit here.
Movie is here, http://www.pas.rochester.edu/~erica/CompareShearAngles.gif
- Waiting to hear back from Christina for CF parameters
Meeting update
- wiki page updates: updated the standard out page and the job sizes page, although the job sizes page needs a bit more work. Question: how many aux variables are there in 2.5D, and in 2.5D with angular momentum?
- also want to update the debugging page to include stack memory issues (a definition of the stack, the quiet seg fault you see with this type of limit, and how Fortran can run out of stack fast; is the default stack 10 MB on most clusters, and is this per node?). Want to also write an explanation of the size of astrobear alone, and how you can do a baseline test to see what you are actually running. Also want to document that the size of info allocations in the standard out only includes the actual info structure, not any other large arrays such as post-processing might use. Also want to update the ticket for the CF problem.
- I read a ton this past week and need to update my library with papers and such
- I am working on making wiki page for CFs meeting tomorrow
- I emailed Christina about using my code; no word back from her yet. Emailed her again today to check in with her
- Should I register for hotel accommodations for the Santa Barbara conference?
- The simulation died with a seg fault when running 5 levels, but not 4 levels. The TI instability seems to be resolution dependent…?
- I am seeing more memory usage than expected on some simulations, going to make a note of on a ticket and come back to later
- A good reference for MHD numerical methods?
Sims updates and memory finding on BH.log
My 64^3 + 5 run was going okay until I got some NaNs; then the memory exceeded what was requested, and bluehive.log gave me an odd error log (check attached). run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript/5levels_nans I can restart from the frame before the NaNs, and possibly use a debug version of the code to try for helpful messages in debugging this?
I am also re-running with 4 levels to see how that run does. run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript/4levels
At least with 2 levels (completed in ~10 hrs), I do not think I saw these NaNs arising.. run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestStackScript and files:/grassdata/erica/from_bluehive/CollidingFlows/3DMHDUnlStack
Also found that the memory use reported in bluehive.log is ~1-2 GB higher than what standard out reports for some low-res test jobs, and that this memory report differs depending on whether I run on 1 or 2 nodes. Perhaps this mismatch between standard out and bluehive.log is due to some overhead in the inter-node, inter-core communication? run:/scratch/ekamins2/CollidingFlows/TEST_2013NOV21/TestBluehivelog
For the high-res job (64^3 + 5), the memory used was reported as 80 GB > the 40 GB total I requested, and also > the 50 GB standard out reported I used. This indicates that perhaps there was an overhead of about 30 GB…? That is, maybe the simulation was using 50 GB, which already exceeded the memory request, but with the overhead it was actually using 80 GB… and was finally killed by the queuing system
This makes it slightly unclear how much memory we should request on the cluster if the memory standard out reports /= the memory used by the cluster.
Meeting update
- Making the stack unlimited fixed the seg faults for the CF runs!!
- Have now completed a 3D, 64^3 + 2 levels run
- Am running a 64^3 + 5 levels run now on the AFRANK queue
- Completed latest draft for BE paper
Meeting update
I am trying to debug a seg fault on BH that occurs when I run a 3D AMR simulation of the MHD colliding flows module in the debug queue,
http://astrobear.pas.rochester.edu/trac/astrobear/ticket/326
I had a bit of trouble running with Valgrind, but figured out why (will add this to the wiki page on Valgrind). First, I needed to load the valgrind module on bluehive, and second, I needed to modify my .pbs script from this call,
mpirun -np $NUMNODES valgrind -v --leak-check=full --leak-resolution=high --num-callers=40 --read-var-info=yes --track-origins=yes --log-file="valgrind_dump.txt" -machinefile nodefile.$PBS_JOBID ./$PROG > astrobear.log
to this call,
mpirun -np $NUMNODES valgrind -v --leak-check=full --leak-resolution=high --num-callers=40 --read-var-info=yes --track-origins=yes --log-file="valgrind_dump.txt" ./$PROG
Note, I removed the additional log files (machine log and astrobear log). I would like to have those printed as well, and hope that by fiddling with the command I can get it to do so. For now, though, I am running with Valgrind, and will add the output to the ticket shortly.
Another problem I ran into when trying to debug the seg fault was that I was unable to get error logs when running with debug flags. Christina was also not getting traceback information. Jonathan said he suspected this might be due to the stack limit on Bluehive, and suggested that I:
-create a launch.s script in your run directory that looks like this:
#!/bin/bash
ulimit -s unlimited
./astrobear
-then make it executable by running
chmod u+x launch.s
-then modify your pbs script to run launch.s instead of astrobear - ie,
mpirun -n 64 launch.s
I am also trying this. Will update by meeting hopefully.
I am in communication with a CIRC tech to see if he can help me fix a bug in bluehive.log. When you run a sim on 1 node, two helpful values are reported: 1) total memory requested, and 2) memory used. However, when you run on multiple nodes, the 2nd value is reported as N/A. It would be nice to have both values as an additional measure of memory usage. I also learned that you can only request 2 nodes in the debug queue, even though the CIRC wiki states you can request up to 16 processors in the queue. This is interesting given there are more than 2 nodes available for the debug queue.
Meeting update - Nov. 18, 2013
Last week was a short week for me - I left town late Thursday for the weekend to celebrate my grandmother's 70th birthday. I spent the time I had learning about estimating memory usage for simulations (following this page here - http://astrobear.pas.rochester.edu/trac/astrobear/wiki/Tutorials/JobSizes). It was basically straightforward in retrospect, but I got a little confused by the wording/explanation on the wiki page, so I spent a bit of time sorting that out. I plan on editing that page to make it clearer. After many emails back and forth, I think this is how it goes:
- The number of allocated variables for MHD vs. hydro simulations is different than I expected. For hydro simulations, you only count the fluid variables you need for the simulation (e.g. in 2D this would be rho, px, py, E - 4 variables). For MHD, there are *always* 8 variables in the q-array (cell-centered quantities), no matter the dimension of the problem - (rho, px, py, pz, E, Bx, By, Bz). In addition to these 8 variables, you also count the variables in the aux arrays - these include the EMFs on cell edges and the B fields at cell faces. There are different arrays for EMF parents/children and the accumulated EMF over the steps. Together in 2D these give another 5 'auxiliary' variables - Bx, By, Ez, Ez_child, Ez_parent. In 3D there are 12 - Bx, By, Bz, Ex, Ey, Ez, Ex_child, Ey_child, Ez_child, Ex_parent, Ey_parent, Ez_parent. (A worked example follows below this list.) What about 1D? And should we put that on the wiki page too?
- GB is a misnomer for RAM; the unit is actually the GiB in *most* usages of the word. 1 GiB = 1024³ bytes. So when I was doing my memory calculations using GB = 1e9 bytes, I was off from the values quoted on the wiki page. I didn't know RAM is actually quoted in units of GiB, even though the CIRC pages label them GB. This should be explained on the wiki page for memory estimates as well.
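For example (my own back-of-envelope, assuming 8-byte reals and ignoring ghost zones, refined levels, and bookkeeping overhead): a 3D MHD base grid at 64³ carries
$64^3 \times (8\ \mathrm{q} + 12\ \mathrm{aux})\ \mathrm{variables} \times 8\ \mathrm{bytes} \approx 4.2\times10^7\ \mathrm{bytes} \approx 0.04\ \mathrm{GiB}$
so the base grid itself is cheap; the memory in a deep AMR run is dominated by the refined levels (a fully refined level has 8x the cells of the one below it) plus the kind of overhead discussed in the post above.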
Also just a tip — we need 9 credit hours to be full-time graduate students now — not the usual 12, according to the department website and verified by the graduate student coordinator.
Journal Club
Here is the paper I mentioned was so cool yesterday —
Meeting update
This past week I ran 2D simulations of the colliding flows (Hydro, previous posts), began working through understanding how the module adds clumps to the flow, and am working through the literature on colliding flows. I found a very interesting paper on this problem in 2D, will post in my library. Tried moving to Bluehive for the runs, but am having a bit of trouble there. Will check into this later today.
Comparison Runs
I spent today playing with visualizing these runs in visit, and adjusting some of the parameters in the data files. Here is a plot summarizing some of my findings -
Focusing on the left column of the plot, we see the field restricts the lateral dispersal of the collision region away from the colliding streams. This seems to have 3 important effects. 1) In the hydro run, the clumps fragment and become localized structures moving in the flow, whereas in the MHD run a long filamentary structure forms. 2) The 'clumping' of material in the MHD run is enhanced, producing higher densities than in the Hydro run. A sink particle forms in the MHD run by frame 134, but does not form (at least by the final frame 140) in the hydro run. 3) The collision region seems bound in the MHD run, but not in the hydro run.
The right column shows the following: without cooling, we see the collision region expands (quickly) — increasing density leads to increased pressure, which forces the evacuation of the gas. In the hydro case, we see little localized clumps of lower density material form within the collision region. These structures do not form in the MHD runs, but rather we see again longer filamentary structures form there.
The boundary conditions are reflecting on left and right, and extrapolating on top and bottom. The behavior in the hydro, non-cooling case makes sense with these BCs, but I am not entirely sure what is going on in the MHD non-cooling case where there is some weird flow coming back into the grid on the top and bottom..
High Res 2D MHD Colliding Flow run
This is a 64 + 3 levels 2D run with reflecting, B-parallel BCs on x1 and x2 (extrapolating on y1 and y2). I am seeing cells reach higher densities in this higher res run than in the lower res run, and a single sink particle by about 16 Myr (compare to the 3D smooth hydro runs, tsink ~ 20 Myr).
http://www.pas.rochester.edu/~erica/CFrho_reflectingBParallelBCs.gif
I am still NOT seeing striations with these boundary conditions, even at higher resolution here in 2D.
For comparison, here is the same movie at the original lower resolution. Note, no sink particles are formed here, and the max density is ~500-
http://www.pas.rochester.edu/~erica/CFrho_reflectingBParallelBCs_lowres.gif
2D MHD Colliding Flows and different Boundary Condition Effects
2D Mods to global.data
A 2D version of the colliding flows problem seems to be working. I just modified 3 points in the global.data file,
ndim = 2 !before was 3
…
mx = 64, 64, 1 !before was 64, 64, 64
…
xbounds = -25d0, -25d0, 0d0, 25d0, 25d0, 0.78125d0 !this has domain running from 0 to dx in z direction
Results
This reduces the simulation time from ~an hour to ~a minute. While gravity in 2D might be wonky, this allows us to quickly debug the striations.
I checked that the striations at boundary still exist in 2D with extrapolating BCs. They do -
http://www.pas.rochester.edu/~erica/2DCF_rho_extrapBCs.gif
I next tried to change the boundary on the left and right sides of the box (x1 and x2). I find that different boundary conditions produce wildly different results in the box,
- Using the BCs "reflecting, B-parallel" reduces or nearly completely removes the striations. These BCs force the magnetic field to remain normal at the wall.
- Using "reflecting, wall" seems to create near vacuum states in the colliding flows. The lower densities seem to be greatly reducing the time step, making these simulations run quite long (~>hr). I don't see a change in the max speed in the standard out however, which I am curious about. This is clearly not right. I am not sure why the boundary condition is leading to this behavior inside the flow.
- Using "periodic" boundary conditions leads to very strange behavior as well. It seems to make sense that you'd expect this boundary condition and reflecting to produce the same behavior, but they do not. Again, I am not sure why the boundary is effecting the flow in this way.
Here is a movie of the density for the different runs (the simulation is 2D in the x-y plane) - http://www.pas.rochester.edu/~erica/all4BCs_CFrho.gif
I will make note of these effects and when I can look more deeply at the code's prescription for boundary conditions will check my understanding. For now, I am running the 3D version of the code with the reflecting, B-parallel BCs to check that they look good still.
Directory Locations
Build directory is @: /grassdata/erica/CollidingFlows/scrambler_3.0
Runs are @: /grassdata/erica/CollidingFlows/CollidingFlows/2D/MHD
Meeting Update -- 11/4/13 -- Erica
Working on readings and colliding flows simulations. I first ran a quick and dirty simulation with MHD on, and extrapolating BCs to reproduce the striations Jonathan talked about earlier.
MHD COLLIDING FLOWS:
Here are some movies,
normal slice (y-z plane is normal to flow direction) of the density and Bx-
http://www.pas.rochester.edu/~erica/normal_slice_BandRho.gif
parallel slice (slice plane contains the flow direction) of the density and Bx-
http://www.pas.rochester.edu/~erica/parallel_slice_BandRho.gif
here is positive vx on a log scale, both types of slice -
http://www.pas.rochester.edu/~erica/sliced_vx.gif
The large shock feature that blows backward off of the collision region jumps out at me. The sound speed is initially Cs = 117 everywhere in the 0th frame, and dt = 0.01, so the farthest a sound wave should have been able to travel by the 1st frame is ~1. What is causing this blow off?? - This estimate is wrong, I think, because I didn't take B into account in the signal speed. . .
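(A note to self, standard MHD rather than anything run-specific: with B included, the relevant signal speed is the fast magnetosonic speed, which for propagation perpendicular to the field is
$c_f = \sqrt{c_s^2 + v_A^2}, \qquad v_A = \frac{B}{\sqrt{4\pi\rho}}\ \mathrm{(cgs)},$
so a strong enough field could carry the front well past the ~1 estimate above.)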
here is a normal slice of pressure -
http://www.pas.rochester.edu/~erica/normal_slice_press.gif
HYDRO COLLIDING FLOWS:
normal slice of rho-
http://www.pas.rochester.edu/~erica/normal_slice_rho.gif
vx -
http://www.pas.rochester.edu/~erica/normal_slice_vx_hydroCF.gif
press-
http://www.pas.rochester.edu/~erica/normal_slice_press_hydroCF.gif
to see that the striations are absent in the hydro set-up, see this parallel slice of vx -
http://www.pas.rochester.edu/~erica/parallel_slice_vx_hydroCF.gif
For the hydro case, dt = 0.8, scalegrav = 0.02, Cs = 90. The Jeans refinement criterion is $\lambda_J = 4\Delta x$; solving for rho gives $\rho = \left(\frac{c_s}{4\Delta x}\right)^2 \frac{\pi}{G} \sim 100{,}000$ (an increase of more than 5 orders of magnitude) before a sink can form…
Meeting update
Registered for Fire Down Below: The Impact of Feedback on Star and Galaxy Formation conference for April - http://www.kitp.ucsb.edu/activities/dbdetails?acro=stars-c14
Need to still apply to present a poster
Have some extra things to add to the Hypre Appendix - like the stencil, matrix equations, how we incorporate Hypre into AstroBEAR and the types of BC's available with the elliptic solver.
Been reading on outflows and colliding flows, namely Plunkett and Jonathan's papers so far.
Meeting update
Here is the working paper figure for the pressure approximation,
As you can see, 1/10 case (light blue line - accidentally not on legend) is no longer a valid approximation with this method. This makes sense when we consider the Jeans length of the ambient — the ambient is Jeans unstable, and so the ambient in this case collapses more like a uniform sphere. When I do a similar approximation for the pressure, only using the solutions given by uniform collapse, I get this,
From this plot, we see the same value of the 1/10 case as in the previous approximation, which seems to indicate this is some boundary case between the dense and light approximations — ambients equal and denser than 1/10 are better approximated by the uniform collapse. We also see the same grouping up of the lighter cases that we do in the sparse approximation, but they are grouped up around a different value. This indicates that the trend is robust — settling of the atmosphere leads to some external pressure threshold needed to induce collapse.
Here is a wiki page on the 2 different approximation methods I used to make these plots -
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/SurfacePressureApprox
In running out the 1/100 case longer, did not get collapse. This may have to do with the scale height for this case extending beyond the box..?
I added the sparse approximation description to the paper, and worked on finishing up the appendices on self-gravity. Here is a completed paper draft,
http://www.pas.rochester.edu/~erica/BEsphere_EK_09.20.13.pdf
update
For a planar HSE atmosphere, with differential equation
$\frac{dP}{dz} = -\rho g,$
the solution is an exponential with a given scale height,
$P(z) = P_0\,e^{-z/H},$ where $H = \frac{c_s^2}{g}.$
Assuming the force is given by the gravitational attraction between sphere and atmosphere,
$F = \frac{G M_{be} M_{atm}}{R_{be}^2},$
and Matm is the mass of the accumulated atmosphere, gravitationally attracted to the BE sphere through freefall,
$M_{atm}(t) = \int_{R_{be}}^{r_{ff}(t)} 4\pi r^2 \rho_{amb}\,dr$
(where rff is found by inverting the tff for r for 2 point masses), we get the expression for P = F/A at the surface (after plugging in the variables),
Using this expression gives the following results,
Vertical lines indicate when a sink formed for the corresponding run. The horizontal line is the initial external pressure on the BE sphere, which since the sphere is a critical BE sphere, equals the critical pressure.
We see that as eta = rho(Rbe)/rho(amb) increases (i.e. the ambient gets sparser), it takes longer for the pressure to grow at the surface of the sphere, inducing collapse.
Plotting denser ambient mediums skews the graph, as they increase much more rapidly on the y-axis.
Here is a table of the runs summary,
Things for me to consider:
Is this an adequate description of the problem, does it give good results?
How does box size come into play?
Should we do a different approximation for the dense cases?
Does this explain the qualitative results in the table?
Meeting update
I have been working through details of the derivation we discussed the other day. Please see my previous post on the pressure expression and my concern there.
In the dense ambient case limit, we talked about using a spherical wave that has had time enough to fall onto the BE sphere via freefall. For this, we considered Vff=Sqrt(GM/r) (which may be off by a factor of root 2), and said v=r/t, r=vt.
I think this may be better given by the following.
Imagine the following situation, m falling down to mass Mr from initial velocity = 0. For the case of spherical collapse, m is the mass of a thin shell, attracted to all the mass Mr within the shell by Gauss's law. None of the shells cross, so Mr is constant over the collapse.
By energy conservation, Ei = Ef, or
$-\frac{G M_r m}{r_0} = \frac{1}{2} m v^2 - \frac{G M_r m}{r_f},$
where r0 is the initial radius material is falling from, and rf is the final radius (for our case, rf = Rbe). Solving for v, we have
$v = \sqrt{2 G M_r \left(\frac{1}{r_f} - \frac{1}{r_0}\right)}.$
For the case rf << r0, the 1/rf term dominates, and this gives the expression
$v \approx \sqrt{\frac{2 G M_r}{r_f}}.$
The flow near the core, at late times of spherical collapse approaches this value (in both the LP solutions and the BE collapse solns).
In looking for R(t), maybe we should write it this way:
$\frac{r_0}{t} = \sqrt{\frac{2 G M}{r_0}},$
i.e. the velocity of the material at a distance r0 above Rbe is the freefall velocity. Rearranging and solving for r0(t) gives
$r_0(t) = \left(2 G M\, t^2\right)^{1/3},$
which is to say, everything within this radius has had time enough to fall onto the BE sphere..
Analytical expressions for BE paper
Been thinking about the equations we were deriving the other day, and their origin.
Starting from the Navier-Stokes equation,
$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla P + \rho\,\mathbf{g},$
we can solve for P in hydrostatic equilibrium (dv/dt = 0), assuming P = P(z). The vertical component reduces to
$\frac{dP}{dz} = -\rho g,$
or, integrating from depth z up to the top of the material,
$P(z) = \int_z^{z_{top}} \rho\, g\, dz'.$
This says the pressure at some depth is a function of the instantaneous weight of all the material on top of it.
From here, it seems we were thinking of extrapolating this equation to the dynamic case,
$P(R_{be},t) = \int_{R_{be}}^{R_{be}+R(t)} \rho(r,t)\, g(r)\, dr,$
where the bounds run from Rbe (the radius of the BE sphere) to Rbe + R(t), and R(t) depends on whether we are in the subsonic or supersonic regime. (This integral would actually be in spherical coordinates; it is left in Cartesian form for simplicity.)
Does this seem okay to people still?? I am not entirely convinced, and am having a hard time finding any discussion of the physics here. If it is okay, I can try to crank out some numbers for the expected P(Rbe,t), although I am not sure what function I would use for rho(r,t). Thanks
Update
Hypotheses,
- As the box becomes more and more Jeans stable (i.e., the density is low enough that the major force acting on the ambient gas is the gravitational acceleration toward the sphere - i.e. Bondi flow), then for a fixed volume of ambient the sphere will go unstable on a timescale given by the Bondi flow (such that an appreciable amount of matter has been accreted by the sphere). In order to test this, I need a timescale on which the sphere goes unstable (how to define this? - one option is sketched after this list). Then, compare to simulations. In this limit, the amount of mass in the box is also a strong factor in the stability of the sphere, so varying the box volume changes the stability of the sphere.
- As the box becomes more Jeans unstable (i.e., the density is high enough that the box is well approximated by a uniform sphere and collapses as such), the amount of mass in the box, over some length scale, becomes less important, given that the ambient collapses on its own. This case leads to strong compression waves on the sphere.
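One way to quantify the first hypothesis (standard Bondi theory, nothing measured from these runs): the steady spherical accretion rate onto a mass M embedded in gas of density $\rho_\infty$ and sound speed $c_{s,\infty}$ is
$\dot{M}_B = 4\pi\lambda\,\frac{(GM)^2 \rho_\infty}{c_{s,\infty}^3},$
with $\lambda$ an order-unity factor depending on gamma ($\lambda = e^{3/2}/4 \approx 1.12$ for isothermal gas). A candidate instability timescale would then be the time for the sphere to accrete some appreciable fraction f of its own mass, $t \sim f\,M_{sphere}/\dot{M}_B$, which could be compared against the measured times in the table.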
Questions to address
What is the appropriate 'collapse time' of a BE sphere? Possible answers: 1) the time it takes the BP case to collapse to a sink, 2) the average free-fall time, 3) the dynamical time
Is the collapse time a constant? That is, once the BE sphere goes unstable, does it always collapse in the same amount of time?
Movies of pressures at BE boundary for different limiting cases
1/10, bigger (looks the same)
Also, attached is a spreadsheet pdf of the runs and whether there was collapse and times
Is time to go unstable some constant fraction of the free fall time of the ambient?
Attached PDF with calculations.
Basically, it does seem a pretty good approximation that the collapse time for the BE sphere is constant (t ~ 0.2) for the non-compression wave cases. I checked this in the simulations using the time between the start of monotonic increase in density past the original values and sink formation (since in the non-compression wave cases we see breathing modes first). On the other hand, it seems less straightforward to gauge the collapse time for the strong compression wave cases. If I take collapse as beginning at the time I see monotonic increase of the core densities above their initial values, the compression wave cases lead to short collapse times.
No clear trend in the fraction of free-fall times is observed. You can check the pdf values in the attachment.
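(For reference: the free-fall times quoted here and in the attached table are, I believe, the standard uniform-sphere expression evaluated with the initial ambient density,
$t_{ff} = \sqrt{\frac{3\pi}{32\,G\rho}}.)$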
BE turnover updates/discussion
The time interval we chose to test stability of the BE sphere in the light case was 5 sound crossing times of the SPHERE. Perhaps if we had known better at the time, we would have chosen some timescale for the ambient instead. But we were interested in the stability of the sphere, so it seemed reasonable to choose that timescale. 5 sound crossing times = 1.1 for the sphere. It is different for the ambient in the different cases, given the change in ambient sound speed.
In looking for the turn-over, I found the 1/50 case did not collapse in t=1.1 (which is 5 tsc for the BE SPHERE), but the 1/45 case did.
A closer look at the timescales operating in the AMBIENT show why.
For the ambient, tff=1.6, slightly longer than the simulation time of t=1.1. Perhaps the ambient did not have time enough to collapse sufficiently so to trigger collapse onto the sphere. Running the simulation out longer showed the ambient to collapse far enough down to trigger collapse of the sphere. I found it only needed a little longer, and the BE sphere collapsed fully to a sink particle by t=1.4.
A movie (http://www.pas.rochester.edu/~erica/50LineOutLonger.gif) shows some interesting behavior of the ambient in this small time period between t=1.1 and 1.4. It is also of interest that the BE sphere collapse is triggered on a timescale similar to the timescale of the ambient's gravitational infall. Does this trend hold up in the different cases?
In checking the tsims compared to the tffs, I see that the trend holds for Jeans stable mediums, but as the medium becomes Jeans unstable, the simulation time becomes much shorter than the free-fall time of the ambient. Maybe this is because the relevant timescale for a Jeans unstable medium is something faster than the global free-fall time?
Why did 1/45 (the seeming turn-over point) collapse in the time interval t=1.1? A look at the table, http://www.pas.rochester.edu/~erica/BEcalcs.pdf, shows that for the ambient, tff=1.4, a little shorter than that of the 1/50 case..
I think this table shows some interesting trends. Also, it is interesting to consider the timescale of collapse for the BE sphere once it goes unstable; is this a constant throughout the different runs? That is, say we take the collapse time to be the time it takes the unstable BP case's sphere to collapse (~0.2); do all spheres then collapse in t = 0.2 (i.e. it takes t = tfinal - 0.2 time for the sphere to go unstable)? It is interesting that this timescale for the BP case is on order the sound crossing time for the sphere, as well as the average free-fall time of the sphere. Other authors have suggested a more correct dynamical timescale is given by the thermal radius over the sound speed. When I calculated this, however, it was way off from the simulation time. I am not sure why.
As for the Jeans length of the ambient (calculated at t=0), the table shows that when the jeans length is ~2-10 smaller than the box size we see strong compression waves. When the jeans length is comparable to the box size, as in the 1/10 case, we see a turn-over in the type of collapse, changing from a strong compression wave early phase, to a redistribution of matter. I am curious if in a case like the 1/10, if I make the box 2-10x larger than the jeans length, will the collapse be qualitatively different.
Lastly, we talked about increasing the size of the 1/50 box so that it is strongly jeans unstable. On first thought, I would expect ambient collapse to happen on timescales given by tff, regardless of the size of the box. This seems justified given the 1/50 case is "jeans stable", but still collapses, and on a timescale specified by the density of the ambient. However, given the jeans stability of the ambient seems tied with the generation of strong compression waves, I am inclined to think that the timescales may change from a jeans unstable to stable medium. This is at odds with the assumption that within our box, we can approximate the collapse of the ambient as a uniform sphere with collapse time on order the free fall time.
Another point we have talked about before that I am not entirely sure of is this: given the matter in the box is finite and contained inside the box (i.e. we are not supplying a continuous flow of matter into the box at the boundary), might the ambient material see the edge of the sphere as a wall and build up on it in such a way as to move toward a new HSE? In contrast, if we were continuously supplying new matter into the box, it seems this wouldn't happen, because matter would always be falling down onto the BE sphere and would eventually trigger collapse.
Maybe this equilibration of the ambient would be possible for a very stable BE sphere configuration..?
So in summary, there seem to be 4 points here I am making,
- How is the Jeans stability of the ambient tied with physical characteristics of the BE sphere collapse, such as early phase type, i.e. compression wave vs. matter redistribution
- How is the Jeans stability of the ambient tied with the timescale of the ambient collapse
- What is the timescale the BE sphere collapse operates on
- Can the ambient move to a new HSE with the sphere?
Meeting update
- Poisson solver update - previous blog post http://astrobear.pas.rochester.edu/trac/astrobear/blog/erica09192013
- Ran rho amb = {1/60, 1/50, 1/40} rho(Rbe) simulations checking for point where the density of the ambient is just enough to force collapse. Found that 1/60 and 1/50 are too light to collapse, 1/40 collapses. I am running 1/45 now, will be done by morning.
- New intro finished for BE paper. The methods section (apart from description of physical characteristics of clump - such as massive enough to be site of massive star formation) is done as well. These can be read in the pdf that is here- http://www.pas.rochester.edu/~erica/BEsphere_EK_09.20.13.pdf. Still working on this list for paper edits:
- Discuss the turn-over, its implications, and analytical reasoning
- Change wording from isothermal general thing, to more specific case in conclusions/modify conclusions
- Add a discussion section
- Add an appendix on HYPRE and its use in astrobear
- Massive star forming cluster… add to methods the physical properties our clump has
Athena more user friendly than Astrobear?
I heard through the grapevine that Athena is more user-friendly than AstroBEAR. This has to do with the ease of getting it onto a personal machine and being able to run it (within 5 minutes), and also that published tests are incorporated into the code - tests that can be run by the user and are referenced in the literature..
Makes me wonder if we should spend some time refining/condensing our intro user material, and making it easier (and faster) for new users to get the code up and running.
Update on Poisson solver/operator splitting
The code runs and takes the 1st time step (hydro+source step), after which it chokes during the next hydro step. I checked the initial data for scaling (see https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting - note at this small of a perturbation (delta = 1e-6), the pressure is a constant because of rounding error), and output from poisson solver during that first step. Here is the poisson output-
phi -
discrete 2nd derivative of phi -
forcing function -
The convergence requirement, $\max(|u^{(n+1)} - u^{(n)}|) < 10^{-5}$, is reached, although the convergence between the actual solution and the numerical solution seems spotty in areas (as seen when comparing the discrete 2nd derivative to the forcing function),
The code goes through the first time step and produces this behavior,
density -
velocity -
pressure -
It seems to be qualitatively correct in density and velocity. Gamma = 1.4, and so am not immediately sure if the pressure behavior is as expected..
Potential bugs are,
- gamma=1.001 (for which my formulas for the Jeans mode are derived) diverges before it can complete the first time step, but using gamma=1.4 gets through the first step
- not sure on u in energy source term update formula (see wiki page on that - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/PoissonPlusHydro)
- Scaling, either in the code or scaling the constants such as scalegrav (also here- https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting)
- making some modifications to the routine where I am getting divergence, such as increasing the number of iterations, or how it is calculating an initial guess at pstar
- The convergence of poisson solver is not good enough
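On that last point, here is a minimal serial sketch of the iteration loop with the convergence test quoted above (names are hypothetical, and this is plain Gauss-Seidel with fixed ends rather than my actual periodic routine):
SUBROUTINE PoissonGS(u, f, n, dx, tol, maxiter)
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n, maxiter
  REAL, INTENT(IN) :: f(n), dx, tol
  REAL, INTENT(INOUT) :: u(n)     ! boundary values u(1), u(n) held fixed
  REAL :: unew, diff
  INTEGER :: i, iter
  DO iter = 1, maxiter
     diff = 0.0
     DO i = 2, n-1
        unew = 0.5*(u(i-1) + u(i+1) - dx*dx*f(i))  ! discrete u'' = f
        diff = MAX(diff, ABS(unew - u(i)))
        u(i) = unew
     END DO
     IF (diff < tol) EXIT   ! max(abs(u^(n+1) - u^(n))) < tol, e.g. 1e-5
  END DO
END SUBROUTINE PoissonGS
Tightening tol (or switching the test to the residual against the forcing function, rather than the change between iterates) would be one way to probe whether the spotty convergence above is the culprit.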
Meeting update
Here is my page on operator splitting, with derived update formulas I am using-
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/PoissonPlusHydro
Here is my page on setting up the Jeans instability for my code, including how I scaled the gravitational constant,
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting
The results section is not filled in, as I am still working out some kinks in the code. There seems to be a bug somewhere that is causing nans…
Along with this, will be running some new BE sphere simulations, and preparing for the MHD colliding flows simulations, as well as editing the paper with Phil's suggestions.
I am not sure why this code was not giving me an oscillation in p init (the W3 array). Looking at it again, the likely culprit is the extra factor of delta multiplying W1 in the W3 line: W1 already contains the delta*Cos(k*x) perturbation, so the oscillatory part of W3 is O(delta^2) ~ 1e-12 and vanishes against p0 by rounding (consistent with the constant-pressure rounding note in the post above),
DO i = 1, cells
   x = REAL((REAL(i)-0.5)*dx)
   W1(i) = rho0 + delta*Cos(k*x)                             ! density: background + perturbation
   W2(i) = -(delta*growthrate*lambda)/(rho0*2*Pi)*Sin(k*x)   ! velocity from linearized continuity
   ! Original line: W3(i) = p0 + delta*((W1(i)*rhoscale*Kb*100)/mh)/pscale
   ! The extra delta makes the oscillation O(delta^2); assuming p0 is the
   ! ideal-gas pressure of rho0 at T = 100 K, the whole W1 should be used:
   W3(i) = (W1(i)*rhoscale*Kb*100)/mh/pscale   !ideal gas law in cgs for pressure, over pscale
   WRITE(85, *) X, W1(i), W2(i), W3(i)
   ! Compute conserved quantities
   CS(1,i) = W1(i)
   CS(2,i) = W2(i)*W1(i)
   CS(3,i) = 0.5*w1(i)*w2(i)*w2(i) + w3(i)/(gamma - 1.0)
END DO
BE paper
The paper is attached to this blog post.
Meeting update
I have been reading some material on outflows, including the PPVI chapter and Plunkett. My code is done except for some scaling that I am still trying to work out. Will have a page on the operator splitting when it is all done. I looked at the Poisson solver performance for the periodic BCs of a 1D sine wave. Here is a plot of the difference between the discrete 2nd derivative and the initial density distribution, as a function of position. I am not sure why the error is low right at the ghost/physical cell boundary, but other than that the trend seems good - highest error near the edges, smaller as you move toward the interior. Also, I uploaded my poster to the PPVI website.
Order of accuracy
Hi all, here is a plot showing the error I computed for my Poisson solver.
The equation it is solving is:
$\nabla^2 u = -\pi e^x \left[\pi e^x \cos(\pi e^x) + \sin(\pi e^x)\right]$
The analytic solution is $u = \cos(\pi e^x)$
I calculated the error for different dx, using $E = u_{approx} - u_{exact}$
where $u_{approx}$ is the numerical solution at x = 0 and $u_{exact}$ is the exact solution at x = 0 (= -1)
The error is supposed to go like:
$E \sim (\Delta x)^n$
where n is the order of accuracy.
After computing E for the different dx, I plotted it against what a 1st, 2nd, 3rd order accurate scheme would produce for the given step-size dx:
This seems wonky. The trend seems to start out near 2nd order accurate, but as I reduce dx, it begins to diverge from it. This is the opposite of what I would expect to happen…
I increased the zones in the following sequence, (25, 50, 100, 200, 400, 800), and computed the error at a fixed x=0. I wonder if changing x would give better results… either a different fixed position along u, or a fluctuating position, where the tolerance criteria was minimized…
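(A standard way to make the measured order explicit, for next time: since $E \approx C(\Delta x)^n$, each successive halving of dx gives a local estimate $n \approx \log_2(E_{\Delta x}/E_{\Delta x/2})$; tabulating this pair-by-pair for the sequence above would show exactly where the scheme falls away from n = 2.)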
*UPDATE*
Okay, so I checked in 3 different norms whether this trend still happens, and I see it still… Basically it seems to be 2nd order until dx gets small, where the error begins to increase..
Meeting update
I had so much stuff floating around in my head, I wanted to record it all before I lost it … Here is a page on most of all that I learned in the Poisson solver problem — https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/PoissonSolver
I would still like to add my references and such to it (for future reference should I need them), and try to answer the questions down at the bottom I posted.
I also chewed over how to add the solver to my 1D fluid code and check the Jeans instability growth rate with it… I am a bit stuck here, although I have some ideas. Will share with the group in the meeting shortly.
Meeting update
- 2D code results and page updated - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/2D_Godunov
- Read up on elliptic equations and numerical methods for solving them
- Am planning on finishing the Poisson solver this week and learning about Hypre
Getting in touch
Hi, are you all on skype? My name is erica.kaminski1012
Meeting update- 2D code progress
I have code that now:
1) sets up a 2d mesh with the cylindrical explosion initial conditions
2) prints to file the initial conditions that can then be visualized (easily… ) with mathematica:
Contour plot of initial density, the density (and pressure) are higher inside of the circle:
Here it is in a 3D plot:
and the initial velocity (both in x and y) is zero everywhere on grid:
3) determines the cfl condition, and dt for the time step considering the max speed of waves traveling in either x or y
4) sweeps the grid in the x-direction and solves the local Riemann problem along intercell boundaries in x, taking into account the presence of shears
What I am stuck on doing now is writing these routines only once, so that it can be called for either the x or y sweep, with variables swapped for x and y… I could write the routines the brute force way, so that they are individualized for x and y directions (e.g. Call Sweep in X, Call Sweep in Y), but this is silly, since the equations have the same form in x and y…
I am trying to figure out how to write the routine just once, but as of now my routines involve nested do-loops that cycle through the mesh. By nesting the loops like this, it seems to define a preferred direction for the Riemann problem… one that doesn't seem to be easily swapped if I switch from x to y.
So, that is where I am — trying to write these routines the proper way, one time, and have them called with the variables swapped for x and y directions..
I also spent some time trying to figure out how to visualize the 2D data. I played with importing the 2D array data into the spreadsheet, but the data set was too large for the program to handle easily. This is what it looked like in the spreadsheet initially:
This spreadsheet's cells correspond to points in the mesh, and their values are the value of density at that mesh point. This is why we see the circle in the center of the spreadsheet: the values of these cells have fewer digits than the surrounding cells, so it looks like a circle. If the widths of the rows and columns were the same, we would see an actual circle rather than an oval.
I thought if only I could assign colors to the values in the cells, like Visit does in the mesh, it would be like a 2d psuedocolor slice. I tried getting the data into a form Visit liked, but this was time consuming to figure out, and not much information on this on the web.
Then I resorted to mathematica, which was a much easier time once the output was formatted correctly (as a list of (x,y,rho) values: (x1, y1, rho1), (x2,y2, rho2), etc.) These are the contour and 3D plots above.
Update
The fix seems to be to make the problem 1D. That is, in the x- or y-sweep, make 1D arrays corresponding to the row (or column) you are sweeping along, and feed this 1D array into a routine that 1) solves the local RP at the intercell boundaries, and 2) updates the cells along that 1D array using the Godunov conservative formula. By making 1D arrays and isolating them from the rest of the mesh, the array has no preferred orientation relative to the rest of the mesh. Will work on implementing this fix in my code.
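Here is a minimal sketch of what I have in mind (routine and variable names are hypothetical, and a simple Rusanov/local Lax-Friedrichs flux stands in for the exact Riemann solver just to keep the sketch short and runnable):
SUBROUTINE Sweep1D(w, n, dt, dx, gamma)
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n
  REAL, INTENT(IN) :: dt, dx, gamma
  REAL, INTENT(INOUT) :: w(4,n)   ! rho, parallel mom., transverse mom., E
  REAL :: f(4,n), flux(4,0:n), a(n), u, p, smax
  INTEGER :: i
  DO i = 1, n                     ! physical fluxes and sound speeds
     u = w(2,i)/w(1,i)
     p = (gamma-1.0)*(w(4,i) - 0.5*(w(2,i)**2 + w(3,i)**2)/w(1,i))
     a(i) = SQRT(gamma*p/w(1,i))
     f(1,i) = w(2,i)
     f(2,i) = w(2,i)*u + p
     f(3,i) = w(3,i)*u            ! transverse momentum is passively advected
     f(4,i) = u*(w(4,i) + p)
  END DO
  DO i = 1, n-1                   ! Rusanov flux at each interior interface
     smax = MAX(ABS(w(2,i)/w(1,i)) + a(i), ABS(w(2,i+1)/w(1,i+1)) + a(i+1))
     flux(:,i) = 0.5*(f(:,i) + f(:,i+1)) - 0.5*smax*(w(:,i+1) - w(:,i))
  END DO
  flux(:,0) = f(:,1)              ! crude transmissive boundaries
  flux(:,n) = f(:,n)
  DO i = 1, n                     ! Godunov conservative update
     w(:,i) = w(:,i) - (dt/dx)*(flux(:,i) - flux(:,i-1))
  END DO
END SUBROUTINE Sweep1D

SUBROUTINE XSweep(q, nx, ny, dt, dx, gamma)
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: nx, ny
  REAL, INTENT(IN) :: dt, dx, gamma
  REAL, INTENT(INOUT) :: q(4,nx,ny)  ! rho, px, py, E on the 2D mesh
  REAL :: w(4,nx)
  INTEGER :: j
  DO j = 1, ny
     w(1,:) = q(1,:,j)   ! gather one row: px is the parallel momentum here
     w(2,:) = q(2,:,j)
     w(3,:) = q(3,:,j)   ! py rides along as the transverse momentum
     w(4,:) = q(4,:,j)
     CALL Sweep1D(w, nx, dt, dx, gamma)
     q(1,:,j) = w(1,:); q(2,:,j) = w(2,:)
     q(3,:,j) = w(3,:); q(4,:,j) = w(4,:)
  END DO
END SUBROUTINE XSweep
The y-sweep would be identical except it gathers columns and swaps the roles of px and py when filling w(2,:) and w(3,:), which is exactly why the 1D routine never needs to know which direction it is sweeping.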
Meeting update
Please see my previous post for the appendix write up for the self-gravity TEST (https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica07272013)
Please see my previous post for the different types of curve to visualize the growth rate of jeans instability https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica07262013
I am working on my 2D code. I made a page here https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/2D_Godunov
I don't anticipate it taking too long to finish, and will post the results there.
Also, I got this email last week and wonder if it should be looked into or ignored???
Dear Erica Kaminski,
According to the University of Rochester's online repository you wrote a paper entitled "The effect of the ambient medium on the collapse of a bonnor-ebert sphere: deviation from the canonical outsidein collapse" during your studies.
I am an Editor of LAP LAMBERT Academic Publishing, an international publisher specialized in the publication of research papers.
We would be pleased to consider integrating the above mentioned work in our catalogue of printed books.
In case you are interested in our offer please contact me at your earliest convenience. I will be pleased to send you a detailed brochure for your perusal.
I am looking forward to hearing from you.
Kind regards
Irina Rusu Acquisition Editor
LAP LAMBERT Academic Publishing is a trademark of: AV Akademikerverlag GmbH & Co. KG
Heinrich-Böcking-Str. 6-8 66121, Saarbrücken, Germany
i.rusu@lap-publishing-house.com / www.lap-publishing-house.com
Handelsregister Amtsgericht Saarbrücken HRA 10356 Identification Number (Verkehrsnummer): 11868 Partner with unlimited liability: VDM Management GmbH Handelsregister Amtsgericht Saarbrücken HRB 18918 Managing directors: Thorsten Ohm (CEO)
Self-gravity test for appendix write up
I didn't write the implementation part of the appendix yet, as I think it may be easier once I learn a bit about the gravity solver techniques, and HYPRE (within the next month with my own code building). I made a short outline of what I think this appendix might include, and have built it into the pdf that is here:
http://www.pas.rochester.edu/~erica/BEsphere_AF_07.11.13.pdf
On the other hand, I did write up the test for self-gravity, using the Jeans instability growth rate stuff I have been talking about lately. That is built into the pdf as well. As far as the figures for the section go, I just thought to use both the perturbation vs. time graph and the slope graph from blog post:
https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica07262013
This however, is up to wiser judgment than myself.
A closer look at Jeans Instability plots
First is a plot of the total density at a peak (i.e., a region that is growing in density over the simulation). The analytical function for the density is:
$\rho(x,t) = \rho_0 + \delta\cos(kx)\,e^{\omega t}$
where $\rho_0$ is the background density, $\delta$ is the density perturbation, and $\omega$ is the growth rate, calculated to be ~628.8 (in computational units). At the peak, the Cos(kx) factor is just 1, and so in scaled units the function becomes:
$\rho(t) = \rho_0 + \delta\,e^{\omega t}$
Since this is not an exact exponential, it does not appear as a straight line on a log-linear plot. Here is the data output from astrobear compared to the analytical function, on a linear-linear plot:
Now, since the perturbation $\delta\,e^{\omega t}$ is just an exponential, it looks like a straight line on a semi-log plot. The slope of this line gives the power 'e' is raised to. That is,
$\log(\delta\,e^{\omega t}) = \log\delta + \omega t \log(e).$
Here the log is base 10, NOT base e, so log(e) does not equal 1. This is an equation of a line with slope = log(e)*628.8, or ~273. Thus, if one plots the perturbation on a semi-log plot and calculates the slope, it should give back this number, m = 273.
When calculating the average slope using [log(y2) - log(y1)]/(x2 - x1), I get a number very close to 273. This is good. But perhaps more descriptive is a plot of the instantaneous slope of this line, to see any deviations of astrobear's output from the analytical curve. To make this plot, I calculated the derivative of the log of the analytical function, which is the constant $\omega\log(e) \approx 273$ (as expected), and plotted it against the instantaneous slope of the data (calculated as [log(y_{i+1}) - log(y_i)]/(t_{i+1} - t_i) in my spreadsheet). This is what I get:
As we see, initially the data from astrobear lines up very well with the analytical result, but over a couple of e-folding times, the rate of change increases compared to the analytical curve. By 5 e-folding times, the difference is ~10% (easily seen in the first plot). If we consider the error from the non-linear term to grow roughly as the square of the relative perturbation, $(\delta\,e^{\omega t})^2$ with $\delta = 10^{-3}$, and take t = 5/628.8 (5 e-folding times), then the error is ~2%. This seems to match the data. This means that by a few e-folding times, the non-linear term becomes significant, and so we can no longer assume a linear perturbation to govern the evolution.
Gammas and polytropes..
Is the gamma of the polytropic index the same as the gamma in astrobear? I think gamma in astrobear is the 'adiabatic index', a ratio of specific heats - but this is not the same thing for a polytrope, according to this -
http://en.wikipedia.org/wiki/Polytropic_process#Notation
and here is a write up detailing the math
Meeting update
Here is the new Jeans instability calculation of the growth rate
- periodic boundary conditions, both physical and elliptical
- using expressions given here: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansTest
- tfinal = 5 e-folding times
- isothermal EOS
I see that the code reproduces the growth rate well up until ~4 e-folding times. But by 5 e-folding times, the difference is ~10% in the perturbation. This is much better than it's been: https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica06262013
I could fidget around with the code a bit more, if necessary - Is this good enough for a test of self-gravity for the paper?
Also, here are fun plots showing the velocity and density perturbations on the same graph
t = 0:
Vx is in red, rho is in green. The density peaks where Cos(kx) = 1; e.g., there is a local max at x = 1. The vx perturbation is a Sin(kx) function whose zeros coincide with the density extrema, producing converging flows onto the density maxima, as you would expect is needed to develop the Jeans instability.
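For reference, the initial perturbation pair used here (matching the expressions on the JeansTest page and the W1/W2 lines in the module code quoted in the operator-splitting post above) is
$\rho(x,0) = \rho_0 + \delta\cos(kx), \qquad v_x(x,0) = -\frac{\delta\,\omega}{\rho_0 k}\,\sin(kx),$
which satisfies the linearized continuity equation at t = 0: $\partial_t\rho = -\rho_0\,\partial_x v_x = \delta\,\omega\cos(kx)$, so each density extremum grows at the rate $\omega$.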
t = tfinal:
By the end of the sim, the density perturbation has grown a little, but velocity has grown a lot.
Poster
Attached to this blog post
Meeting update
Last week was spent revising the paper and designing my poster. I am trying to get the poster ready in the next day or so, but am having difficulty with the template Shule sent out. I can't get it to compile without a load of errors, and the .dvi is a crumpled mess of text. I am hoping to get some help with this today or tomorrow so to expedite things. Once this is done, I'd like to finish up the Jeans code this week before leaving town for PPVI.
Meeting update
- Working through some 2D theory/numerics on Euler equations
- Working through Jeans instability mods. Updates soon. Trying to get the right form of the perturbation expressions, as well as debugging
Checking growth rate
Here is the theoretical growth rate (labelled mma) compared to AstroBEAR's (labelled visit) over 1 e-folding time-
They are within ~0.1% of each other by the end of the sim. Is this close enough, i.e. what we should expect??
Here is the same set up, over 5 e-folding times:
By the end of 5 e-folding times, however, they are within 5% of each other.
How does this data seem as a test for self-gravity in the code?
The initial perturbation is small, with amplitude 0.001*rho0.
Meeting update
Updates from last week:
- Pretty close with the Jeans Instability module. I wrote a page on the theory of the instability here- https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansInstability and a page on the module I wrote here- https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansTest. There is still a needed modification to the analytical function that I am working on. Given the initial perturbation is sinusoidal in x and exponential in t, I think what I need to have my module do is: 1. at t = 0, assign rho(x,0) = rho0 + 0.1*rho0*cos(kx) (i.e. prescribe a small amplitude perturbation), 2. for later times t', have the density distribution continually perturbed by an exponential function of t'. This I believe should take the form rho(x,t) ~ rho(x,0)*exp[sqrt(4*pi*G*rho0)*t], so that the amplitude of the initial perturbation grows in time with a growth rate ~ 1/tff. The next thing I need to do in the problem module is to define and store an array of the initial rho. To store this array, I added it to the Info structure. However, I am having difficulty populating this array with Info%q(i, :, :, 1) in the problem initiation subroutine in problem.f90. It seems this is a Fortran language error I am making, but I should ask: if one would like to store information on fluid variables from a given time level, is the Info structure a reasonable place to store it?
- Plot of xi(t) for the intermediate 1/30 case. Also, after thinking more about the critical pressure, I have convinced myself that it does change with time over the course of the sim, as the parameters of the BE sphere change.
- Read chapters 15 and 16 of Toro. These are 'splitting schemes for PDEs with Source Terms' and 'Methods for multi-dimensional PDEs', respectively. I think the general method for multi-dimensional schemes is a VERY straightforward extension of the 1D schemes. However, I still need to delve a little more into 1. how the additional (shear) waves are handled numerically - this involves reading up on how to mathematically/numerically separate the different dimensions in the eigensystem, 2. how the geometric source terms can reduce the dimension of the problem while maintaining mathematical equivalence to the higher-dimension Cartesian counterparts.
Upcoming week:
- Journal Club - Jeans Instability??
- Write a 2D code (unsplit finite volume or multidimensional split scheme?)
- Perform a test with it (thinking the 2D cylindrical explosion in first section of Ch. 17). If I do this test, I need to work through the math of how the 2 sets of equations are equivalent (the 1D + source term of cylindrical and the 2D Cartesian version), and I need to make sure I can solve the 1D "numerical" test of this problem (again, this is devising a scheme that can solve for cells in r using a split scheme to handle the geometric source term) to compare with the 2D results.
- Finish the Jeans instability code and make figures for appendix
- Pcrit/P(r) plots? For which cases?
- Poster?
Meeting update - 6/17/13
- Alfalfa needs a new video card pretty bad. Rich says he can order one, but needs the account number.
- Worked on revisions for the main body of the paper. The updated pdf can be found here - http://www.pas.rochester.edu/~erica/BEsphere.pdf I ran into some trouble describing the last intermediate run (the lightest ambient medium). At first we talked about the sphere slowly acquiring more mass than the critical level and going over the edge. Then we talked about the ambient equilibrating to a decaying exponential, with the resulting surface pressure exceeding the threshold Pext. After looking at the mass of the sphere over time, I would say that this latter point is the case. Not only is the sphere initialized with subcritical mass, but it never again reaches its initial value. This is slightly concerning to me, given that collapse ensues.
Here is an image of the calculated values from visit-
I got these values of mass from the sphere by doing the following - 1) looking at the lineouts of density and determining the position of the discontinuity (xdisc), which we are calling the radius of the sphere, 2) calculating the total mass within the sphere of r=xdisc with a query function in visit.
I noticed that the only sphere that is actually initialized to be at the analytical critical mass is the Matched case. The others have decreasing Minit with decreasing ambient densities. This seems to be related to smoothing of the BE-ambient boundary.
- Wrote a MUSCL-Hancock scheme for the 1D Euler equations. The code runs correctly, and I will be posting my notes and results to this page later today - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/MusclHancock
- We talked about going on to chapters 15-16 of Toro. Chapter 15 talks about source term implementation, and chap 16 goes through multidimensional schemes. Should I next be focusing on source terms and self-gravity, and then follow it up with multidimensional schemes? Or the other way around??
Meeting update
My Roe solver + entropy fix is working now. Here is the latest updated page -
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/RoeSolver#Thecode-EntropyFix
I am out of town until Wed., but plan on continuing my Toro tour de force until I get back.
Next step for the Roe solver is importing it into astrobear. Will ask around to see what this entails.
Meeting Update
- Boundary Effects Page for BE runs - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEboundary - I added to this page my current understanding of boundary problems in simulations, as well as what I am seeing in my sims. If anyone could help refine this understanding, it would be appreciated! Thanks
- Roe Solver with Entropy Fix, is working on Tests 1 and 3, but not 4 and 5 yet -https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/RoeSolver
- Things currently on my plate: debug code, jeans instability sim, BE paper analysis.
Meeting update
- Wiki page on Chapter 10 (HLL and HLLC solvers) — https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/ApproxFlux
- Wrote a Roe solver without entropy fix, but results are wonky — https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/RoeSolver
- Removed chaig and eschroe files from grassdata, but don't have permissions to remove yirak's
Read Toro Chapter 10
A brief wiki page on the subject here - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/ApproxFlux Next is learning about the Roe solver, another approximate Riemann solver in the same spirit as HLL/HLLC.
Update
- Read Toro chapter 9, notes here - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/ApproximateRS
- Almost through Toro chapter 10
- Astrobear movie is done (or near done?)
Star Formation Jamboree Notes
*Fred Adams suggested non-zero velocity initial condition for collapse problem:
- there are semi-analytic solutions for the density distribution
- forms something 'like' a BE sphere
- suspects compression wave won't be set up at boundary
- more realistic than a hydrostatic clump, because he said you wouldn't form a clump in HSE with no motions - not sure about this point
- he would be interested in seeing these results
- talked about filaments feeding into clusters, on scales ~2pc from cluster that is ~2pc across
*Ralph Pudritz said he's been working on:
- making a 'super cluster' sink particle that randomly samples an IMF, produces radiation to provide feedback to parent cloud
- radiation modeled by using a combination of ray tracing and flux limited diffusion to resolve feedback on scales from GMC down to large clumps
- recently modeled the formation of GMC's from gravitational instabilities in galactic disk, & has online database of these clouds and their various properties
- uses Flash
*Hennebelle collaborator (don't remember name) works with BE spheres in turbulent, magnetized environments
Meeting update
- I have put together a near compendium on learning the Godunov method, my results, and discussion here - https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/GudonovMethodEuler
- I have learned A LOT, see previous post also — https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica05032013
- I am scheduled to shoot interview for astrobear movie on Friday.
- Conference went well, here are some notes I took on my discussion with Prof. Adams and Pudritz - https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica05092013key
- Will begin next steps on my summer numerics schedule now
Code is working now :)
Happy to say the bug(s) have all been tracked down and eradicated. I posted results for test 4 on my wiki here — https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica
But the page is still under construction. Would like to take another day to type up my findings and chew over the results.
I'd also like to say… Running my code produces standard out like astrobear… printing the iterations over cells and over time. Running the program takes a decent minute to get through all the calculations, through all the loops and such. Very neat. I can see computational cost already; it is impressive to imagine how fast the cost must grow by adding more dimensions, grids, and physics, like astrobear.
The week in review
- Will be meeting to shoot the footage for astrobear PR movie this Friday, going to do some discussion of multiphysics apparently
- Made a slide for the Star Formation Jamboree — should I include the BE plots I did? Should I include other names on this slide?https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica04272013
- Finished the Godunov code. It was not exactly as trivial as I thought it would be. I practiced utilizing some different pieces of code, namely the exact Riemann solver I already wrote for the Euler equations, using Fortran's module feature for practice. To compile these separate files, I did 'gfortran file1.f90 file2.f90 -o program_name'. This command takes care of both compiling and linking the files into the executable program_name. Care must be taken that the files are listed in order, so that modules are compiled before the files that use them. The code I wrote is accessible from my Godunov page, here: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/GudonovMethodEuler Things compiled, but I am getting a NaN somewhere early on in the program. I am now debugging the code, which is good practice for tracking down NaNs in astrobear. I would like to update that page with some notes on the code, and figures of the results, once I correct the bug.
Update
Finished reading through chapter 6 of Toro; here is the page I am creating for the Godunov method for the Euler equations. I know all the steps now for writing the scheme. I will next have to work on writing the program that will utilize both the conservative form of the Godunov method and the exact RP solver I used before. Once this is done, I will compare to Toro's output.
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/GudonovMethodEuler
meeting update
Hi everyone,
I was out of town last week until Wednesday, and have been focusing on some science outreach stuff in the days since. I will be giving a talk at MCC tomorrow to 3 different classes of science students at noon. This talk is going to describe astronomy, star formation, computational techniques, and astrobear. If anyone would like to attend, it is in room 8-100
I have signed up for the protostars and planets conference, but am still curious about lodging and flights. Should we go ahead and book these? Any conditions we should be looking for?
Plan to have my new coding stuff up on the web tomorrow or Tuesday.
Best wishes
Meeting update - 4/8/13
- Registered for star formation jamboree and hotel — staying Sunday and Monday night — have to submit an abstract still and 1 pg introduction slide
- finished paper (new abstract, introduction, discussion sections)
- Potential BE sphere plot from previous post — on comparing new modified BE sphere from the sims
- Made some progress in reviewing next chapter in Toro, will begin coding this week..
Thinking of a stability analysis for modified BE sphere phase of Matched case...
I am thinking about some analysis Di Francesco did and how we may adopt a similar, but potentially more correct way of doing it…
In assessing the stability of the 'modified' BE sphere in my simulation, i.e. the post-compression wave phase, where the BE sphere has seemingly re-equilibrated in a smaller volume at higher densities, I am curious if there is a better way to evaluate xi for this modified BE sphere and if it exceeds xi_crit = 6.5.
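(For reference, the dimensionless radius here is the standard BE variable,
$\xi = \frac{r}{c_s}\sqrt{4\pi G \rho_c},$
with the critical value $\xi_{crit} \approx 6.5$ quoted above.)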
My idea was as follows… I could gather from the plots I already made the new central density of the modified sphere, and the radial extent (i.e. from flat inner core region to flat envelope region). Then, initialize a new BE sphere with this central density, and radial extent, but with different xi's. The goal would be to see if we could 'fit' the profiles. Only if the profiles were within some small margin of error, would I feel comfortable using the definition of xi and xi crit in evaluating stability as Di Francesco did…
The idea is based on this plot:
Here we see how different xi's (given same rho_c and r) are in HSE of varying slope… Steeper slopes are in unstable HSE as the top curve is the critical value of xi. We would like to try to fit one of these unstable curves to the center, over dense modified BE sphere if possible…
In attempting to do this, I am still trying to work out scaling correctly, but here is a first shot:
Obviously the pink is the data from my matched run, from the chombo that began to resemble a supercritical BE sphere… Note there is a slight bump in it still, perhaps a remnant from the compression wave phase…
Meeting Update
Some comparison plots for Whitworth — (these are a fixed radius, over time, rho)
Includes 2 radii, r = 0.5 and r = 0.3:
Outside-In collapse not unique to "Bonnor Ebert Spheres"
Uniform spheres also collapse outside in —
From Larson's 1969 Paper on similarity solution for uniform sphere:
In this figure, we can see the same density profile we see from my perturbed BE sphere developing, but from a uniformly dense IC in this case… The collapse is highly non-homologous overall, inside regions are collapsing but outer regions aren't… Put another way, only the interior core region is collapsing homologously… What types of collapse are homologous I wonder..?
Curious to check rho and vrad in our code, it seems that indeed outside-in collapse occurs for overdense uniform spheres as well:
From our code —
RHO:
VRAD:
Huh, this is interesting… I am sure this has to do with the fact that Whitworth's solution encompasses all collapse — from LP (uniform) to Shu (singular) to Hunter (critical BE spheres)…
Meeting update
Last week I spent some time reading up on polytropes — both theory and an approximate analytic solver for Erini's code. I also worked on debugging my sink particle, although haven't posted updates on that. Mostly I am going to put that aside until the paper is finished, as it is extra material for that work. Will take the next week to read materials for my paper and begin coding Godunov for Euler equations.
Debugging sink hack
I am trying to debug my sink hack. I switched on the flags:
FFLAGS=-g -traceback -check bounds -check uninit -check pointers -ftz
When I run astrobear made with these flags, the code dies immediately saying the following:
When these flags are not turned on, and instead I use the original makefile line:
FFLAGS=-O3 -xP
The code runs but then dies a few frames after the sink starts accreting.
Update
- I added a sink to the grid and detailed the process in earlier blogs from last week… This was pretty straightforward after I figured out the process. The sink was successfully positioned at the origin at the start of the simulation and became Jeans unstable at the same time a sink particle in the previous simulation was born. It began accreting then, but 3 frames past accretion the simulation encountered NaNs and the code choked… I will look into this further today and tomorrow.
- I have been looking into additional resources for the paper more closely. I read Hennebelle again last week in more detail and wrote up a critique/description of it here: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary. I thought this would be a clear concise way to discuss papers for the paper with co-authors… I am close to writing the discussion.
- Here is our movie
- A few questions: Riccardo Betti had to cancel our interview today for the AstroBEAR video. He is the last interview that I need, and he offered to reschedule for sometime next week. However, before I try to schedule a new time, I wanted to send you a cut of the video as it currently stands. Please take a look at the preview and keep in mind that I still need to add supportive shots of Martin working with the code, which will hide some of the jump cuts:
Overall, is this what you were hoping for? Do you like the style, and have I covered all of the important areas?
I'm a little concerned that the current length is at 4:43. With the addition of Riccardo's footage, this will bump it over the 5 minute mark. Some audiences have trouble watching videos over 5 minutes. Thoughts?
Do you still want Riccardo's voice in the video (to represent a user outside of the group), or is it good as is? This might push your public release date back slightly.
Do you have any interesting videos representing a collapsing cloud that I can include during Adam's introduction?
Can we make the bear more professional?
- I will be giving a talk at MCC in a couple of weeks on star formation and our research group.
- The coding project is likely to pick up again this week; I had to spend some time last week revisiting some earlier chapters, as I seemed to have forgotten some characteristic properties of the PDE's, useful for approximation methods…
Adding a sink particle to the grid
The easiest way to get what I am looking for (a sink at the origin that accretes material in a spherically symmetric manner after the centermost cell satisfies the Federrath conditions) is to hack astrobear, rather than apply fixes and checks to the code for particles near reflecting boundary conditions. The latter is something I would be interested in pursuing, but due to time I wanted to just get the ball rolling on getting a sink at the origin, to check flow properties post formation.
Given the sparse documentation on the wiki for doing this, I thought I'd better outline the procedure I learned here, so it can be ported over to an actual wiki page later.
The hack I did has the following attributes:
- Initialize a fixed sink and associated point gravity object
- Kill any new checks for sinks so that, as the simulation progresses, no others will form (for the BE collapse case no other sinks SHOULD form anyway)
- Modify accretion so that the accreted mass is increased by a factor of 8, to account for accretion from the reflecting ghost zones along x, y, z
- Kill all motion of the sink by removing the particle momentum
As I outlined in previous blog posts on the wiki, amr_control.f90 calls various routines for checking for new sinks, initializing and advancing them, etc.
In particular, the first pertinent routine it calls is particle_preupdate. In this routine, there are loops that 1) check for new particles and 2) collect new particles. I commented both of these out, took some code from 'collects new particles', and placed it in my problem.f90.
Inside 'collects new particles', there are the following lines:
NULLIFY(NewParticle)
CALL CreateParticle(NewParticle)
NewParticle%xloc(1:nDim)=GlobalParticles(1:nDim,i)
NewParticle%moments=GlobalParticles(nDim+1:,i)
NewParticle%iAccrete=FEDERRATH_ACCRETION
CALL AddSinkParticle(NewParticle)
CALL CreatePointGravityObject(NewParticle%PointGravityObj)
NewParticle%PointGravityObj%x0(1:nDim)=GlobalParticles(1:nDim,i)
NewParticle%PointGravityObj%soft_length=1d0*sink_dx
NewParticle%PointGravityObj%soft_function=SPLINESOFT
These lines initialize a new particle and associate a point gravity object with it. Now, there is a distinction between sink particles and point gravity objects. To my understanding, sinks can accrete mass and move, but point gravity objects can only exert a gravitational force. Thus, when one wishes to accrete mass onto a gravitational point, one should initialize a sink + point gravity object. (Whereas if you only cared about exerting a force, with no need for accretion onto the particle — accumulation of matter in the surrounding cells would be fine — you could use just a point grav obj.)
So, I added these lines to my problem.f90 in the routine problem module init, below the ambient and clump objects. To use these routines, I also added a USE statement for the particle declarations module at the top of the file.
IF (.NOT.LRESTART) THEN
NULLIFY(Particle)
CALL CreateParticle(Particle)
Particle%lfixed=.true. !keeps particle fixed at xloc
Particle%xloc(1:ndim)=0d0
Particle%iAccrete=1 !Fedderath accretion, as given in particle def
Particle%moments(1:3)=0
!check print statement for moments
CALL AddSinkParticle(Particle) !accesses particle_
CALL CreatePointGravityObject(Particle%PointGravityObj)
!accesses pointgravitysrc
Particle%PointGravityObj%x0(1:ndim)=0
Particle%PointGravityObj%soft_length=1d0*sink_dx
!not sure about this
Particle%PointGravityObj%soft_function=SPLINESOFT
!default mass = 0 as defined in the type
! starts gaining mass when surrounding gas is jeans unstable for fedd accretion (should coincide with same time a sink formed in previous simulations..)
END IF
The grid is then initialized with this point particle at the origin. It lives there, massless and with 0 momentum throughout the simulation until it begins to accrete. I found that I did not have to explicitly 0 out the momentum though, as checks were already in place that did this for a fixed particle.
To modify the accretion, in the finalize accretion routine, I multiplied dQ by 8:
Particle%Q(1:NrHydroVars)=Particle%Q(1:NrHydroVars)+Particle%dQ(1:NrHydroVars)*8d0
Now the sink will be accreting 8 octants' worth of material.
Now that the edits have been made, the overall structure of the code goes like this —
1) sink + point gravity obj is initialized on the grid
2) the code goes through pre-update, which no longer checks for new particles, but for the one particle in the particle list (added to it in the problem module) it does accretion, synchs accretion, and finalizes accretion. In the do-accretion routine, the code checks whether the Jeans criterion has been violated around the sink particle. If it has, then accretion takes place. Federrath accretion removes just enough gas from the cells to make them no longer Jeans unstable, and adds that gas to the particle. This is the type of accretion I am using. Away from the sink, one expects a Bondi flow to be reached.
3) the code finalizes accretion, where it zeroes out the particle's momentum and adds the new values for the accreted mass and energy (multiplied by the factor of 8) to the q array.
And this all repeats for the next time step.
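For reference, my understanding of the threshold being checked (a paraphrase of the Federrath-style criterion, not an expression lifted from the astrobear source): a cell violates the Jeans criterion when its density exceeds roughly

$$\rho_{\rm max} = \frac{\pi c_s^2}{4 G\, r_{\rm sink}^2},$$

and the accretion step removes just the excess, $\Delta m = (\rho - \rho_{\rm max})\,\Delta V$, handing it to the particle.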
Sink lessons 2
Whoops.. pushed back arrow and just lost all of my post… grrrrrrr.
As I was saying before being so rudely interrupted…
I learned that currently a sink accretes from ghost zones by a) reducing mass in the ghost region, WITHOUT b) increasing the mass of the sink. This is because of the possibility of "over counting" the accreted mass onto the sink if the ghost zone is an internal boundary, hence overlapping with actual physical cells. Q) Do internal ghost zones then get their mass information from overlaid physical cells?
If your sink were near a physical boundary, however, you'd like accretion to work a little differently: such that it both a) reduces mass in the ghost region and b) accumulates it onto the sink, since the issue with over counting is no longer valid.
Astrobear2.0 as of now does NOT make this distinction — it should perhaps be added.
Now I also learned a bit more about the force algorithm — will update here soon.
Learning about sinks in astrobear
Working on understanding sink implementation in the code.
Here is a running dialogue on what I am learning and questions that I have.
The Problem -
1) Currently, the BC's for sinks seem to be set to extrapolating, which is why the sink can move off of the grid… and into the ghost region. Q) Can sinks in the ghost region interact with gas/sinks in the grid?
2) If we made them reflecting, then once a sink hit the boundary, it would stay at the boundary (since all momenta would be equal and opposite at that point). Q) What are the potential negative effects of treating the problem this way when there are multiple sinks?
3) How to modify accretion when the sink interacts with reflecting boundary?
Code -
1) amr_control.f90
In Line 266 of AMR_control.f90, in the subroutine LevelsInit, SinkParticleInit is called which lives in Particle_control.f90:
a) SinkParticleInit used for initializing a sink from a problem module — there is a warning about periodic BC's and potential accretion troubles here that I'd like to understand better.
Then on line 473, in the recursive subroutine AMR(), ParticlePreUpdate is called, which:
a) clears particle momenta for level n (dmom is a (maxdepth x 3) array, with maxdepth set to 16 in globaldec, defined in pointgravitydef: px, py, pz), b) if you're on the finest grid (n == maxlevel), then moments are calculated and synched, c) checks for new sinks and adds any found to the particle list, d) updates accretion for the sinks.
Line 474 calls ApplyPhysicalBCs, then next is ellipticBCs. Then we advance the grids (line 530).
Then we call ParticlePostUpdate, which
a) if lsinks, synchs gas forces for particles on each level, and advances the particle if on the finest level; if lparticles, and on the finest level, calls UpdateParticleObjects. Q) What are these handles, lparticle and lsink?
ADVANCE PARTICLE SUBROUTINE -
a) begins by allocating arrays for velocity, position, mass, gas_accel, sink_accel, and lfixed (not sure what this is) for each particle, b) next populates the arrays from particle%Q(imom(1:ndim)), particle%xloc(1:ndim), etc. Q) How does the imom array get filled? Considering adjacent cells? Boundary cells also? c) next it updates the velocity array using sink_accel, with what appears to be a mean acceleration*dt — vel = vi + dv/dt*(½)*dt. How sink_accel is computed is the matter of GetGasAccel and CalcSinkAcc — so I want to check these routines for potential boundary effects.
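If that reading is right, the update is the first half-kick of a leapfrog-style (kick-drift-kick) integrator — my inference from the expression above, not something I have confirmed in the source:

$$v^{n+1/2} = v^n + \tfrac{1}{2}\,a^n\,\Delta t.$$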
UPDATE PARTICLE OBJECTS -
In this subroutine (hereafter SR) in particle_control.f90, an associated particle's velocity, position, time, and mass are updated. What I think we are interested in modifying here is the particle's velocity (it seems to be momentum, so why is it called v?), which, as can be seen in this SR, goes like:
Particle%PointGravityObj%v0(1:nDim)=Particle%q(imom(1:nDim)) — here, does the array imom contain momentum from ghost cells?
Questions -
1) How to get a center, stable, accreting sink in an octant simulation with AMR?
2) Does each sink have a surrounding mesh separate from the numerical domain?
3) Does a sink form first at a cell center, but then is unrestricted to r? Yes.
4) What are the order of steps? Hydro, source updates (here included are the sinks in particle list), elliptic? Why sink between hydro and elliptic? Where do pre and post update fit into this order?
5) What is n's range? N is the patch level. It runs from -2 to the max level of AMR: -2 is the "container" for the info node, -1 is the domain (one can have many domains in astrobear1.0, but this is not implemented in astrobear2.0), 0 is the coarse grid, and 1 and up correspond to the different AMR levels. Info nodes -2 and -1 do not contain data structures.
Updated BE paper -- Finished Results/Figures
Here is a link to the most updated version of the BE paper with finished results section and figures/captions.
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
Meeting Update
Adiabatic BE sphere —
I revisited the Lane Emden derivation for isothermal sphere and then considered how it would change under adiabatic conditions, outlined in this blog post: https://astrobear.pas.rochester.edu/trac/astrobear/blog/erica02262013
Basically, making the sphere adiabatic does not change its initial configuration, and so no change to the problem module needs to be made.
Running the BE problem module out past sink formation —
I made a page on the results I saw with sink formation in the Matched and BP cases: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/BEMar1
This setup is going to need some work if we want to track the post-sink-formation evolution.
Paper —
I have been working on my paper: added some new text to the results section and began outlining the discussion section.
Coding —
I would like to revisit my Godunov solver this week and check the BC's. I began reading the next chapter on Euler equations with Godunov method. I believe my next step will be to use the Godunov method to solve Euler equations in 1-D. Next, I will adjust the code to be higher dimensional, then will begin learning about a Poisson solver and Hypre.
Movie —
We would like to begin shooting movie material this week. Here is an example of what the end product may look like — http://www.youtube.com/watch?v=GID2K3EiiI8
If we have the green light on script questions, Will would like to interview/videotape Jason TODAY (Have you guys been in touch?)
He would like to repeat some of the questions for everyone (such as, what does astrobear do?)
Adam, when would you be free to shoot your part?
When are Martin and I free this week to shoot supporting material?
Do we have an official PR release date set yet?
Science Outreach —
Meeting at East High this Friday again for an hour of Physics tutoring.
Adiabatic BE sphere
Thinking about the adiabatic BE sphere, I went through some calculations to check whether we should change anything in the BE problem module. To begin, I refreshed my memory on deriving the Lane-Emden equation, and then considered how the algebra would differ in the adiabatic case.
Isothermal case -
If we start with an ideal EOS,
$$P = \frac{\rho k_B T}{\mu m_H},$$
and assume isothermality so we can use the isothermal sound speed,
$$a_T^2 = \frac{k_B T}{\mu m_H} = \frac{P}{\rho},$$
we can rewrite the ideal gas equation to get
$$P = a_T^2\,\rho.$$
Plugging this into the equation for HSE,
$$\frac{dP}{dr} = -\rho\frac{d\phi}{dr},$$
and integrating gives
$$\rho = \rho_c\, e^{-\phi/a_T^2},$$
which, plugging into Poisson's equation,
$$\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{d\phi}{dr}\right) = 4\pi G\rho,$$
gives
$$\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{d\phi}{dr}\right) = 4\pi G\rho_c\, e^{-\phi/a_T^2}.$$
Letting
$$\psi = \frac{\phi}{a_T^2}, \qquad \xi = r\sqrt{\frac{4\pi G\rho_c}{a_T^2}}$$
allows one to scale the ODE into the following form, called the Lane-Emden equation:
$$\frac{1}{\xi^2}\frac{d}{d\xi}\!\left(\xi^2\frac{d\psi}{d\xi}\right) = e^{-\psi}.$$
Adiabatic case -
So, how does this change if we wish to use an adiabatic sound speed Cs, where
$$c_s^2 = \frac{\gamma P}{\rho}\,?$$
The ideal gas eqn, when combined with the adiabatic sound speed, is
$$P = \frac{c_s^2\,\rho}{\gamma}.$$
Plugging into HSE gives
$$\frac{c_s^2}{\gamma}\frac{d\ln\rho}{dr} = -\frac{d\phi}{dr},$$
implying
$$\rho = \rho_c\, e^{-\gamma\phi/c_s^2}.$$
This gives ~the same ODE as above, but in order to scale it this time so that we get the same form as the Lane-Emden equation above, one must make the following modified variable substitutions:
$$\psi = \frac{\gamma\phi}{c_s^2}, \qquad \xi = r\sqrt{\frac{4\pi G\gamma\rho_c}{c_s^2}}.$$
Note the additional factor of gamma coming into xi.
Now the question is, how does this affect the BE module?
Well, the BE module reads in rhoc, r, and xi, from which it computes an isothermal sound speed, aT, to get the temperature that sets the pressure inside the clump.
The module uses the definition of xi above for the isothermal case to get aT:
$$a_T = \frac{r}{\xi}\sqrt{4\pi G\rho_c}, \qquad T = \frac{\mu m_H\, a_T^2}{k_B}.$$
NOTE that for the adiabatic case, the sound speed should have a factor of gamma coming into the numerator under the square root.
Now, considering the adiabatic sound speed,
$$c_s = \frac{r}{\xi}\sqrt{4\pi G\gamma\rho_c},$$
gives
$$T = \frac{\mu m_H\, c_s^2}{\gamma k_B} = \frac{\mu m_H}{k_B}\,\frac{r^2}{\xi^2}\,4\pi G\rho_c.$$
(The gamma factor cancels out anyway, leaving T unaffected, as we should expect!!)
Results
- T should be the same in an adiabatic BE sphere cloud (at least initially).
- Xi, r, and rho_c are given in problem.data, and together they fix the sound speed. Thus, the sound speed differs between the isothermal and adiabatic cases. However, as shown above, T is independent of gamma. Since T is used to set P, gamma doesn't affect the problem setup. Therefore, no changes need to be made to the problem module.
- Further, rho = rho(xi). Since xi has gamma in it for the adiabatic case, the stability criterion for the adiabatic case is going to be different; it would no longer be the case that for an adiabatic BE sphere, xi_crit = 6.451. Considering dP/drho shows that a factor of gamma comes into the P given by the adiabatic ideal gas law (see below).
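Explicitly (my own one-line check, using the EOS forms above):

$$\frac{dP}{d\rho} = a_T^2 \quad (P = a_T^2\rho), \qquad \frac{dP}{d\rho} = \frac{\gamma P}{\rho} = c_s^2 \quad (P = K\rho^\gamma),$$

so the effective stiffness of the gas picks up the factor of gamma in the adiabatic case.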
Meeting update
I was out of town last week so don't have too much to share, but since we last met here are some updates:
Science Outreach - I met with Prof. Luehmann at East High, where she introduced me to Bertha, a sophomore in the Science Stars. I got to tutor her for her AP Physics class, which she is struggling with. I am scheduled to meet with her again in 2 weeks. I will also be planning a trip to the Planetarium for the group and will keep details posted.
AstroBear Movie - We have been given permission to use the astrobear comic image as our logo. Would we like to crop this image down at all to exclude "AstroBEAR and Kid" (http://nedroidcomics.livejournal.com/86373.html)? In order to get the ball rolling on making the PR video, can people send me or Baowei their schedules? What days/times are you unavailable during the week? I can forward these to our videographer to begin to schedule shooting times. Also, attached is the outline for the movies — we should decide on who will be interviewed for each section!
Updated Godunov page
Hey all, my result for my Godunov code is on this page —
https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/GudonovMethod
It's not immediately obvious why the solution is falling off beyond x~1.2. The initial condition is plotted in red… Up until that point, the solution matches Toro's. Not sure if I should take the time to diagnose this. Probably should move on; I got the big picture here of a) the Godunov method, b) marching through the time evolution, c) how to test/write/debug fortran code.
Meeting update
- Meeting with East High girls for outreach program on Friday morning
- Can't get a copy of Toro from printing center on campus
- Finished Godunov solver - this was for the 1D inviscid Burgers' equation. Just finishing up some tests of the code and will update my page on the results as soon as that's ready (www.pas.rochester.edu/~erica/God.f90). A sketch of the Godunov flux for this equation follows below this list.
- Worked on writing results section — ~½ through (www.pas.rochester.edu/~erica/BEsphere.pdf)
- Next chapter in Toro is for Godunov method for systems of non-linear PDE's (Euler equations)
- Going out of town next week for some family things back in Philadelphia
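The Godunov flux sketch mentioned above, for the 1D inviscid Burgers equation (in Python rather than the Fortran of God.f90; the function names are mine):

def godunov_flux_burgers(uL, uR):
    # exact Riemann solution of u_t + (u^2/2)_x = 0, sampled at x/t = 0
    if uL > uR:                      # shock, speed s = (uL + uR)/2
        u0 = uL if (uL + uR) > 0.0 else uR
    elif uL > 0.0:                   # rarefaction entirely to the right
        u0 = uL
    elif uR < 0.0:                   # rarefaction entirely to the left
        u0 = uR
    else:                            # transonic rarefaction: sonic point at x/t = 0
        u0 = 0.0
    return 0.5 * u0 * u0

def advance(u, dt, dx):
    # first-order Godunov update; end cells just copied (crude transmissive BCs)
    F = [godunov_flux_burgers(u[i], u[i + 1]) for i in range(len(u) - 1)]
    return [u[0]] + [u[i] - dt / dx * (F[i] - F[i - 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]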
Meeting update
- Met with April Luehmann — see blog here for details: https://clover.pas.rochester.edu/trac/astrobear/blog/erica01302013
- Talked with guys at CRC about making astrobear movie. There are 2 short movies we are thinking of doing: 1. Public Release of astrobear2.0, and 2. Short intro for new users. The first will require actors. The second a narrator.
- Finished intro and methods for paper. Just have a few misc equations to add: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
Astrobear movie
There are 2 movies we talked about making today - 1) Public release video, and 2) instructional video.
I took down notes and can explain the general gist of things on Monday morning.
Science Outreach Update
I just met with April Luehmann about science outreach. Here is a distilled version of what I learned:
- The university does not have a centralized structure, and so many departments are disconnected and communicate poorly. Pushes for an outreach program between departments are usually made and then dissolve once group members change or move on.
- No science outreach department in the University
- April is the PI on research for scientific teaching. She has a group of doctoral students with varied research focuses (girls in science, underprivileged urban students, technology in the classroom, neuroscience and psychology, etc.), all with the main objective of teaching science in revolutionary ways (not limited to traditional methods) to students that need help.
- In the summer there are 2 sessions she oversees: A) prepares masters students for teaching science, B) has masters students teach students; students are from East High school and Freedom school.
- In the Fall and Spring semesters, there is an after school club called the Science STARS (Students Tackling Authentic and Relevant Science) at East High. This is an NSF funded effort to get girls involved in science. Here is the page discussing the grant and program: http://www.rochester.edu/news/show.php?id=3941.
- Throughout the academic year, the STARS leaders engage/mentor/hang out with the students.
Where do we fit into this model? Well, instead of starting up our own science outreach program (time, money, lots of effort to get connected to local schools), we can piggyback off of April's program. This means potentially the following: a) this semester, taking the kids to the planetarium or out star observing with a telescope (I volunteered to do this with April), b) in April/May, we as a larger group can organize mini-lessons with April and the students, c) I may participate in research meetings with April's team occasionally, d) I may attend a few of her lessons in summer session A on science teaching (and could distill the information to the rest of the group).
Meeting update
- Meeting with April Luehmann this week for coffee to discuss possibilities for science outreach. April is a Professor at the Warner School in science education. She does research in and guides science outreach programs.
- Toro can't be copied by the UR copy center due to copyright. I am looking into printing off some copies from an online source.
- https://clover.pas.rochester.edu/trac/astrobear/wiki/Tutorials - Added page on aasTex tips
- From the Uni's IT desk, have the following options for film production services:
Steve Fasone and two part-time professional staff, based out of the Med Center. Rate: $50 per hour. They charge for the duration of the event plus 1 hour to account for prep, setup, and strike. A DVD camera original is included. Additional copies and editing (such as adding titles) incur additional charges. They provide other services outside of production, including: editing; graphics; DVD & CD authoring; creating web-friendly assets; duplication; print graphics; packaging; and captioning (in the form of DVD subtitles, open captions as part of the video, or web captions). Contact: Steve Fasone at steve_fasone@urmc.rochester.edu or 585-275-2540
Undergraduate Film Council: Rate: $20-25 per hour, depending on the amount of equipment needed to service the event. Contact: sstewar9@u.rochester.edu
and, URTV: Base fee, plus a $25 per hour labor/equipment fee for additional services as needed. DVD production fee: $7 per DVD (please allow 2-3 weeks for processing). Contact: URTVmail@gmail.com
- Working on my paper: made headway with embedding figures (see the previous post on how to convert to .eps format efficiently with the software I was using), and on my methods/results section: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
Saving a .eps image from libre calc in libre draw --
Found this in the forum: http://ask.libreoffice.org/en/question/37/how-do-i-export-a-chart-in-an-image-format-from/ and it was super duper helpful.
- In Calc, left-click once on the chart to select it.
- Right-click and select "Position and Size…"
- Write down the values of Width and Height for the chart.
- Open a new drawing in Draw (from Calc select File > New > Drawing).
- In Draw, select Format > Page…, and set all margins to zero. Then, in the Paper Format section, select "user" as Format, and set the page Width and Height to the values of the chart.
- Go to the Calc window, select the chart and copy it.
- Go back to the Draw window and paste the chart. It should occupy the whole page.
- Now you can export the drawing (File > Export…) to several different formats. You can even choose a vector format if that is the best option for later use.
Meeting update
I have updated the figures with colors and in units of sound crossing time for the various BE runs. Also, I have been reading up on ApJ submission requirements; found some helpful sources and will post them on the wiki later today. I couldn't find anything on naming sections on the official submission websites — any criteria here? Last week I read chapter 5 in Toro, too. https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/BERunsFigures
The search for amino acids in space
If they can be here, does that implicate an affinity for connectedness and purpose? If amino acids here are the building blocks of DNA and hence life, do they possess a natural affinity for this elsewhere?
Meeting update
I began typing up the paper in Latex. This was a bit of a learning experience, but seems to be fine now- progress here: http://www.pas.rochester.edu/~erica/BEsphere
There were some things I learned about getting images into the correct form for an ApJ draft, which I posted in a previous wiki entry.
Worked on New Users stuff, like organizing the User's Guide, etc. It still needs to be edited, I think — it can be smoother and cleaner than it is already. Other things on my plate as far as documentation goes: FAQ, discussion boards, additional physics. Also, what is the New User's lesson plan? Should new users be directed to go through the User's Guide in order (I think this is best)? Do we want to re-arrange the tree structure in the wiki by renaming pages accordingly?
In other news, I am reading the Godunov chapter in Toro now.
A new user's perspective
So after talking with Jake it seems like the following need improvement for the New User:
-A big picture summary of astrobear would be nice to start the absolute new user off (computational domain, cell-centered fluid variables, spatio/temporal solution of Euler equations, data files, refinement, objects, etc)
-Setting up the mesh manually in the "writing your module" page, i.e. coding up the domain and a shape to put within it manually
-Then doing the same thing but this time with the ambient object and shapes
Also, should the new user ALWAYS go through chs. 1-4 sequentially? As for Jake, he skipped to Ch. 4 (Writing your own module), and as such, I think it hurt his intuition and understanding of the code. I think it would be best for everyone to go through the basics of compiling, running, visualization, and data files first before writing a module. Otherwise, it could be really unclear what the module's purpose is.
Schedule of topics for Christine's visualization lesson:
Have a BE sphere collapse run directory ready to go - here: grassdata/erica/NewUserLessons/Visualization/run_dir
Explain briefly what we are setting up, 3D BE sphere at (0,0,0) that is perturbed to collapse, 3 levels of refinement
Run the simulation
Open files in visit
Make a pseudocolor plot of rho, slice it
Show the mesh
Step through the time states
Add a title, and a time state
Make a movie
Lineout
Make a movie
Query options
Selection options (clip)
Query options with a clip
Expressions
Updated User's Guide under "New Users" -- Check it out!
The wiki is beginning to look better. Please refer to the updated New User's Guide to see if you like where things are going. I organized this page into a flow that seems most efficient and logical for a new user to follow. Note I have only really moved things around, and have not gone through and checked/edited the content of any of these pages. I will be meeting with Jake today to help him with module writing, and have asked him to note any problems he may find with the module-writing help page.
Would be nice to have this all spiffed up for Christine's upcoming visit, but it seems like it will be a continual project of updating, adding, and editing information on these pages. Also, we may want to rename pages so as to be more organized, as Eddie and I were thinking about on the discussion forum.
Getting Images from Libre Calc to ApJ Latex Pre-print
I made plots for the BE paper in Libre Office Calc spreadsheets. ApJ pre-prints should include plots in .eps format. To save an image from Calc, one must do a workaround:
- Make a copy of the plot and paste it into Libre Draw — Calc can't save images
- Format page so to eliminate margins
- Size your picture to match that of the page — if the margins aren't set to 0, there could be data cut off from the picture if the plot isn't correctly aligned to the page… frustrating :/
- Save your image as .eps, with size = size of picture
This image is now in the right format to be called in the latex file.
Making copies at UR copy center
One can drop a book off at Meliora Hall and they will send it over to the hospital's copy shop, where they can make copies and bind them for us.
Each double-sided page = 5 cents (2.5 cents for a single-sided page). Cover binding ranges between $2.50-$4.50.
For the relevant chapters in Toro, chapters 1-8, I priced it out to be ~$18.00.
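A rough sanity check on that number, assuming ~300 double-sided sheets for those chapters and a mid-range binding (both figures are my assumptions):

$$300 \times \$0.05 + \$3.00 \approx \$18.00.$$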
THIS WEEK'S SCOOP
Over the break, I finished the exact Riemann solver program. The results are here: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/ExactRiemannSolver
Next I will be learning about approximate Riemann solvers. I will begin reading Toro chapter 5 this week.
I also finished paper figures, here:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/BERunsFigures
I had to adjust the old script to make the plots look better, sampling from smaller-sized zones as the resolution increased over the sim. The script now samples shells of the same radius as the smallest dx on the grid at any time. This is much more accurate. The plots look pretty good, but the BP run is still wonky despite the radial averaging. I wonder if I should just fit a smooth curve to the data here?
The Intro for the paper is nearly done:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
Would like to begin moving this wiki material into a latex pdf this week as well.
Exact Riemann Solver finished
I have finished the Exact Riemann Solver program, tested it against Toro's, and it works well. The code uses the different wave speeds to determine the state at any position (x,t) on the grid. This was a really neat concept. Using relative wave speeds, one can determine at which point in solution space you are, and thus the fluid variables at that point are given analytically. This is because there are essentially 5 possibilities: pre- or post-shock, or ahead of, behind, or within the rarefaction fan. The waves spread out from the contact discontinuity (although I am not sure why this would be a strict condition from the equations), and so the algorithm checks the relative pressures ahead of and behind the waves to determine whether each wave is a shock or a rarefaction. A shock has simple expressions for the post-shock region, given by the jump conditions, so the pre-shock region is just the initial data state and the post-shock region follows from the jump conditions. The post-shock region is aka "the star region", and its pressure and velocity are constant throughout. This region is where the initial discontinuity of the Riemann problem sits as well, later to travel through the grid as the contact discontinuity. What I am trying to get at is that behind any wave, rho is given by either the jump condition or, for a rarefaction, the isentropic condition. This is a constant value until the contact is reached. Once the sampling point passes the contact, the same is true on the other side. So in this sense there are 4 regions: 1. left state, 2. left star region (with p, u given by the Riemann solver, and rho determined by the left wave's post-wave solution), 3. right star region (same situation as 2, but with reference to the right wave), 4. right state.
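For reference, the standard post-wave density relations being described (Toro's notation, with K = L or R and p_* the star-region pressure):

$$\frac{\rho_{*K}}{\rho_K} = \frac{\dfrac{p_*}{p_K} + \dfrac{\gamma-1}{\gamma+1}}{\dfrac{\gamma-1}{\gamma+1}\,\dfrac{p_*}{p_K} + 1} \;\;\text{(shock)}, \qquad \rho_{*K} = \rho_K\left(\frac{p_*}{p_K}\right)^{1/\gamma} \;\;\text{(rarefaction)}.$$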
Here is my progress page that links to my code and shows the results of the Toro tests: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/ExactRiemannSolver
The directory is here: grassdata/erica/ExactRiemannSolver
Meeting update
I updated my page with things currently on my plate, as well as pages for them:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica
Namely,
- I updated a BE paper figures page: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/BERunsFigures
- Made a coding page for the exact riemann solver I am studying: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/ExactRiemannSolver
- Made a page where I can publish drafts for my paper: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/u/erica/BEPaper
The paper introduction is nearly done (not published on that page yet, as I need to do some polishing of the text and of my references to papers/results). I have begun writing a Fortran program for the Riemann solver, following the outline I posted on the page above. I am writing it in the fashion of an astrobear module, for good practice. I also thought about plotting the B&P run. B&P normalized their data to unperturbed values, but I feel it is clearer to use perturbed values (which will both normalize the density to the perturbed central density and move the non-dimensional truncation radius beyond the critical xi). This most clearly shows the effect of the perturbation: to increase xi beyond its critical value. Also, comparing the density to the unperturbed central density seems less intuitive than comparing it to the perturbed central density. As for normalizing the times in the plots, I derived a sound crossing time for the BE sphere, consisting of variables specific to the BE problem, and have normalized all the times in my plots to it.
Meeting update
Last week I finished reading Toro chs. 1-4 and met with Adam about the first program I will be writing. Basically, it will be a so-called exact Riemann solver, which takes 2 initial data states (left and right) and solves for the pressure of an intermediate state in the "star region". This pressure is found using an iterative approach, starting from an initial approximation to the actual pressure. Once it is found, the remaining fluid variables can be solved for, once the types of waves on either side of the star region have been identified as either shocks or rarefactions. This is because each type of nonlinear wave has a set of conditions that apply to the density and velocity across it (only pressure and velocity are necessarily constant across the middle wave, the contact discontinuity). With this in mind, once P is found, the program will follow a flow chart to determine which conditions are met, and hence which equation or set of equations should be solved to find the remaining fluid variables of the problem.
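A minimal sketch of that pressure iteration (in Python for brevity, while the actual program is in Fortran; the formulas follow Toro ch. 4, and the simple mean-pressure starting guess and function names are my own choices):

import math

GAMMA = 1.4

def f_K(p, rho_K, p_K):
    # Toro's function for one side: shock branch if p > p_K, else rarefaction
    a_K = math.sqrt(GAMMA * p_K / rho_K)          # sound speed of the data state
    if p > p_K:
        A_K = 2.0 / ((GAMMA + 1.0) * rho_K)
        B_K = (GAMMA - 1.0) / (GAMMA + 1.0) * p_K
        return (p - p_K) * math.sqrt(A_K / (p + B_K))
    return 2.0 * a_K / (GAMMA - 1.0) * ((p / p_K) ** ((GAMMA - 1.0) / (2.0 * GAMMA)) - 1.0)

def df_K(p, rho_K, p_K):
    # derivative of f_K with respect to p, for the Newton step
    a_K = math.sqrt(GAMMA * p_K / rho_K)
    if p > p_K:
        A_K = 2.0 / ((GAMMA + 1.0) * rho_K)
        B_K = (GAMMA - 1.0) / (GAMMA + 1.0) * p_K
        return math.sqrt(A_K / (p + B_K)) * (1.0 - 0.5 * (p - p_K) / (p + B_K))
    return (p / p_K) ** (-(GAMMA + 1.0) / (2.0 * GAMMA)) / (rho_K * a_K)

def star_state(rhoL, uL, pL, rhoR, uR, pR, tol=1e-8):
    # Newton-Raphson on f(p) = f_L + f_R + (uR - uL) = 0 for the star pressure
    p = 0.5 * (pL + pR)
    for _ in range(100):
        f = f_K(p, rhoL, pL) + f_K(p, rhoR, pR) + (uR - uL)
        df = df_K(p, rhoL, pL) + df_K(p, rhoR, pR)
        p_new = max(p - f / df, 1e-14)            # keep the pressure positive
        if abs(p_new - p) < tol * 0.5 * (p_new + p):
            p = p_new
            break
        p = p_new
    u = 0.5 * (uL + uR) + 0.5 * (f_K(p, rhoR, pR) - f_K(p, rhoL, pL))
    return p, u                                   # star-region pressure and velocity

Sampling then reduces to comparing x/t against the wave speeds on each side, as in the flow chart.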
I also began working on writing the introduction and thinking about redoing the plots for the paper so that the curves show the actual resolution (which increases over the course of the simulation). I am thinking that for the plots, I can adjust my program to use just a constant lineout in the center region of the sphere (where I think the angular distribution of data is symmetric), and an averaging scheme over the outer shells. In this way, the data sampled from Visit should automatically adjust to smaller radii as more refinement is added later in the simulation.
Here is what I have for the Intro so far:
- Various properties of the “Bonnor-Ebert” (BE) sphere, a hydrostatic sphere in pressure equilibrium with its ambient environment, make it a good candidate for numerical modeling of protostellar collapse. First, as a candidate star forming structure is envisaged as gravitationally bound and unstable, it is easy to imagine a protostar evolving from an initially hydrostatic configuration. Indeed, spherical clumps have been observed in or near hydrostatic equilibrium, such as the Bok Globule, B68 (Myers). Second, the stability criterion against gravitational collapse has been worked out analytically. Third, pushing the sphere out of the stability regime with various physical perturbations illuminate collapse characteristics. Such features of the collapse may help advance single star formation theory as well as provide clues to observational astronomers in identifying potential star forming sites.
- While the collapse of a BE sphere has been studied extensively over the years, the literature reveals studies of the BE sphere in precarious and unphysical conditions. The BE sphere has largely been modeled as residing in a low density ambient medium, so as to seemingly isolate the collapse of the sphere from its environment. Examples of previous works - in response to Shu's claim that all collapse would be inside-out - show the BE sphere does not collapse in this way…
- Although modeling the collapse in this way does illuminate unique features of the BE collapse, it inadvertently constrains the model. If the BE sphere is to be considered an early basic model of star formation, it should be examined in a more physically plausible setting. That is, the BE sphere should be tied to its parent cloud as it would be in nature, by being modeled with no discontinuous boundary between BE sphere and parent clump. The collapse properties of such a simulation would be more physically faithful to any actual situation involving the collapse of a protostellar BE sphere. Discuss those papers that began to address this - Myers and Hennebelle.
- How this paper is different from previous similar work.
Self Gravity papers to read? https://clover.pas.rochester.edu/trac/astrobear/wiki/ExternalLinks#Self-GravityCoreCollapsePapers https://clover.pas.rochester.edu/trac/astrobear/wiki/SelfGravity
Meeting update
Not too much to post on the blog this week, as I have mostly been learning from Toro this past week and spent some time working on BE paper stuff. I am in chapter 4 of Toro now, and it seems like I will be writing up the exact Riemann solver code this week.
Meeting 11/19/12
Last week was spent out of town and so not much to share. Today, focus has been on reading Toro. I would like to move through this task of reading and coding in the next month. Oh, and as far as holiday traveling, I will be out of town beginning tomorrow until next Tuesday, visiting relatives in St. Louis, Missouri.
Happy Holidays to everyone.
Titles for upgraded versions of AstroBEAR
Random thought of the day:
Maybe for new releases of the code, we can use different animal names, i.e.:
AstroLION, AstroCRANE, AstroSHARK,….
Meeting Update
I worked on writing scripts and making figures last week for the BE paper. Here is the page I put together of the results:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica/BERunsFigures
Added a page on adding tracers to objects, linked on https://clover.pas.rochester.edu/trac/astrobear/wiki/CodeExplanation.
This week I have a research project and 30 minute talk to prepare for the astro course I am taking. I will be busy for the next few days working on that.
Next up on my plate will be getting the data for the sub crit case and making a similar figures page for that run, compiling references for the BE paper, writing the BE paper, reading Toro, writing my code, and working with Jonathan on the grid generating algorithm stuff.
Meeting update
This past week, I have been working on:
- Writing a script to radially average quantities in Visit. It seems Visit subcycles (see a previous post with images), so it made sense to write a Python script for Visit to do the radial averaging, as opposed to doing it in astrobear, since a-bear doesn't currently subcycle. I wrote this script (commented on in a previous post) and am attaching it here for anyone to use. I made some preliminary plots just to check whether we are happy with the simulation output, now that we can see what it looks like radially averaged:
- Read Ch.2 Toro
- Looking into debugging mesh-asymmetries with Jonathan. Will write up what I learned so far in the next day or so.
- Played with tracers; I would like to ask someone to help with this, as I am not sure if I am setting them properly.
Script for averaging over successive shells in visit
Here is a script I wrote in Python for averaging over successive shells in Visit, writing the results to a file, then plotting that file as a .curve in Visit:
What this script does is the following:
- Begins with a small sphere (or in my simulations, an octant of a sphere) of radius ~2*dx, where dx = the length of the smallest cell. The sphere is averaged over for the quantity of interest — here rho — and this value, along with the radius, is printed to a file.
- The operators are removed, and then added again in a for-loop that adjusts the radius of successive shells to clip and then average. Inverse = 0 means clip (in other words, get rid of) the data within the clip object, whereas inverse = 1 clips the data outside of the clip object.
- The for-loop iterates over 48 steps, increasing the radius over which to average by ~dx each time. The for-loop ends at about 1.5 Rbe.
- Average-value data written to a .curve file is then plotted and visualized in the Visit window.
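A minimal sketch of the approach in Visit's Python CLI (the database name, variable, center, and dx below are placeholders for the actual values, and this version averages within spheres of growing radius rather than differencing shells):

OpenDatabase("chombo00020.hdf5")            # placeholder file name
AddPlot("Pseudocolor", "rho")
DrawPlots()                                 # plots must be drawn before querying

dx = 0.03                                   # smallest cell size (placeholder)
out = open("rho_shells.curve", "w")
out.write("# rho_avg\n")

for i in range(48):
    r = 2.0 * dx + i * dx                   # radius of this averaging sphere
    AddOperator("Clip")
    c = ClipAttributes()
    c.funcs = c.Sphere
    c.center = (0.0, 0.0, 0.0)
    c.radius = r
    c.sphereInverse = 1                     # inverse = 1 keeps the data INSIDE the sphere
    SetOperatorOptions(c)
    DrawPlots()
    Query("Average Value")
    out.write("%g %g\n" % (r, GetQueryOutputValue()))
    RemoveAllOperators()                    # strip the clip before the next radius

out.close()
OpenDatabase("rho_shells.curve")            # re-open the result and plot it as a Curve
AddPlot("Curve", "rho_avg")
DrawPlots()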
Things I learned during this process:
You can record the steps you take in making a plot in the GUI, and the GUI can then translate these steps into Python. I did this for some of the more complicated plot attributes.
The plot needs to be drawn before Visit can query; when you add operators, you must also draw the plots in order to query.
Python is pretty cool.
Meeting
I have the 5 runs now:
- BP
- Light
- Matched
- 3 in between Light and Matched, log spaced
I am sorry not to have movies posted by tomorrow, but I am having issues with Visit over my remote connection. I will make them tomorrow after the meeting.
10/11/12
I was very excited to get the new data and begin analysing it. First, I wanted to check out the mesh of the octant simulation. This is what I found:
From this 3D plot, we see that the mesh is squarish in the y-z plane, but roundish on the other 2 faces of the cube. Depending on which face I choose to make plots of, the mesh will be slightly different (and the way the fluid flows will be slightly different as well):
A pseudo-color plot of the density can be found here:
As you can see from this movie, the ambient is very poorly resolved. I wonder about the consequences of this. The fluid variables for each cell are calculated by considering gradients along cell-cell interfaces. If the cell is coarse, as in the ambient, then one value of density is assigned to that coarse cell, whereas if that one cell were split into many little cells, there might be more structure than is seen at the lower resolution. Now, the structure may be missing here, but is there a significant loss of information? The collapse seems to be (largely) spherical, so the loss of information would be in the radial direction. Is this right? Is it like the density is being "averaged", so to speak, over a larger volume = the volume of the coarse cell?
I tried to get more refinement in the ambient, so I removed the de-refinement object from my problem module (see the last post). I expected refinement based on a) Jeans in the ambient, and b) the density gradient in the sphere. The grids were initialized as expected. However, later in the simulation, BH killed the job, reporting a seg fault. A look at the last frame revealed massive refinement throughout the grid:
The difference in the refinement calls between this and yesterday's post is that in this most current set-up, I had astrobear refine on the Jeans length.
10/10's directory is /alfalfadata/erica/DEBUG/Matched_From_Bluehive 10/11's is /grassdata/erica/from_bluehive/MatchedMeshMadness
I'd like to figure out what the refinement is being triggered by.
10/10/2012
The new set of simulations for the matched case has been a bit difficult to get off the ground. First, I was unable to get frame 0, due to refinement to the highest level being triggered in the ambient, ostensibly by the default gradient-in-phi refinement criterion. By adding clearAllRefinements to the grid, I was able to run the sim.
This past week I learned why the simulation was then getting to frame 13 and slowing down A LOT. It was because a much larger filling fraction of the AMR mesh was being triggered, due to density gradients forming in the ambient medium:
Now, this triggered refinement may be due to boundary effects, in which case the gradients are extraneous information that isn't important. Or, they may be physically meaningful, and something we may want to actually resolve. If that is the case, then I will have to see about moving to many more processors and more memory to carry the calculation further. For now, the fast way to get data is to de-refine outside of the BE radius.
What I'd like to accomplish with my refinement:
- derefining outside of the sphere on everything
- refining inside of the sphere on 2 things: Jeans and density gradients
- The code I used was:
CALL AddRefinementCriterion(Mass_Field)
CALL AddRefinementThreshold(JeansLength_Field,LESSTHAN, (/(4d0*levels(i)%dx, i=0, MaxLevel)/))
CALL CreateRefinement(Refinement)
CALL CreateShape(Refinement%Shape)
Refinement%Shape%size_param=(/1d0, 1d0, 1d0/)
CALL SetShapeBounds(Refinement%Shape)
Refinement%BufferCells=4
Refinement%tolerance=-1d0
I noticed 2 things of interest:
1) When I do not add the de-refine lines (from the 3rd call on), the entire ambient was refined to level 1, whereas the max level was reached only inside of the sphere:
This seems to make sense if that refinement is being triggered by the Jeans criterion, which I am asking the code to use with the lines:
CALL AddRefinementCriterion(Mass_Field)
CALL AddRefinementThreshold(JeansLength_Field,LESSTHAN, (/(4d0*levels(i)%dx, i=0, MaxLevel)/))
A quick calculation shows that, yes, the Jeans length is < 4*dx in the ambient, so this makes sense:
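Reconstructing that quick calculation, with the same ambient numbers as in the Jeans-refinement post below:

$$\lambda_J = c_s\sqrt{\frac{\pi}{G\rho_{\rm amb}}} \approx 3.65 \;<\; 4\,\Delta x = 4\times\frac{15.4}{16} \approx 3.85.$$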
2) When I add the de-refine outside of the BE sphere, the ambient is coarse, and only the sphere is refined maximally:
I am happy with this refinement for now and am running in the AFRANK queue. The higher levels of refinement inside of the BE sphere should be restricted to Jeans refinement, by adjusting qtolerance to have a low tolerance for levels 0-5 and a high tolerance for levels 6-7.
Questions
- How does cells/jeans work with the refinement subroutine?
- How does refVarFactor work with the refinement subroutine?
Approximating m(t) onto the BE sphere
Did a calculation today to try to get a handle on the mass falling onto the BE sphere in the light ambient case. That is written up here (also accessible from my page):
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/massBElight
BE module write - up
Worked on writing up a description on initializing a BE sphere on the grid:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/BonnorEbertModule
Hope to add tomorrow some information on the actual BE_module in the physics folder of a-bear, and on controlling the ambient medium inside problem.f90.
Nice reference for Latex writing on wiki
So one no longer needs to flank either side of the latex formula.
http://en.wikipedia.org/wiki/Help:Displaying_a_formula
From what I have gathered today, it seems you just need to do:
< {{{#!latex formula stuff here }}} >
When you are trying to group terms together, use the curly bracket {}.
A study of the gravitational potential of my BE sphere and the surrounding ambient
Yesterday, in light of the anomalous crash of a-bear while running my matched ambient case, I began some detective work on the possible cause.
Comparing the light and matched cases in Visit, I didn't notice any glaring discrepancies in rho, but did see drastic differences in phi. So, I set out to try to better understand whether these were actually unexpected.
I did some basic calculations of phi yesterday and made some plots. Today I was able to make a nice wiki page describing my methods and findings.
At the bottom of the page, you will see that the calculated phi matches the simulated phi. That is a nice result.
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/GasPhiBE
Meeting update
- Confirmed BE sphere in a light medium is stable for 5 crossing times:
https://clover.pas.rochester.edu/trac/astrobear/blog/erica09252012
- Running tests on the Matched ambient case. I am getting a seg-fault when running on 16 processors in the Debug queue, and some other quirkiness. Will work on it today or tomorrow and post a ticket if need be.
- Wrote outline for BE paper, and thought about the next steps for the project. To talk to Adam about later today.
- Read and took notes on Toro, ch. 1.
Refining based on Jeans Length
I have been stepping through using the new refinement objects in the BE problem module. I added to the module both density gradient and Jeans threshold criteria for triggering refinement. I did so by following the steps on the RefinementObjects page, and by adding 2 calls sequentially in problem.f90.
The way Jeans-based refinement works is: each cell has associated with it a local Jeans length, lambda_J = Cs*sqrt(Pi/(G*rho)). The refinement criterion is that if a cell's local Jeans length < 4*dx on a given level, then that cell is marked for refinement.
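In equation form:

$$\lambda_J = c_s\sqrt{\frac{\pi}{G\rho}} \;<\; 4\,\Delta x_\ell \quad\Longrightarrow\quad \text{refine the cell on level}\ \ell.$$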
In my global.data file I set mx=16,16,16 and levels=7 with gtolerance=.10,.10,.10,.10,.10,1e30,1e30,.10 in an attempt to force the first 5 levels to be based on density gradients alone. If the collapse proceeded to make the central cells Jeans unstable, then the threshold-based refinement would kick in on levels 6 and 7.
For the light ambient case, the refinement criteria worked well:
There were only ever 5 levels of refinement on the grid; the Jeans threshold was not reached.
However, when checking the other case of a Matched ambient medium, the code would freeze or segfault. Hmmm… The only difference was the value of the ambient rho. Since the code was freezing at grid 3 initialization, I had a feeling the entire mesh was being refined. Since I was simply raising the ambient density by a constant everywhere, it did not make sense that this would be due to density gradients. So I computed the Jeans length for ambient cells in Visit: (<SoundSpeedScaled>*sqrt(pi/(ScaleGrav*rho))) = 3.648 in the ambient. Compare now to 4*dx = 4*(15.4/16) = 3.85; ah, the Jeans length is slightly smaller. So, sure, refinement is expected in the ambient.
The interesting thing I noticed next was that, contrary to what I'd expect, I was unable to de-trigger this refinement. I added more cells to the root level, which would mean smaller dx, and so no Jeans refinement, right? Well, that did not happen. Not even with 22 cells! So, why the freezing at level 3? I restarted a sim with 2 AMR levels and checked the output. Sure enough, the ambient was refined, and not just to the 1st level, but to the max level. No wonder the code was segfaulting on BH with 7 levels! A movie of the Jeans length and the mesh then revealed the next interesting thing:
http://www.pas.rochester.edu/~erica/weirdness1.gif
Is this ring maybe due to density falling onto the sphere in a compression wave? Well, it wasn't; rho was constant out there. But the Jeans length is a function of Cs, and hence of temp as well, and so a movie of that revealed this:
http://www.pas.rochester.edu/~erica/2weirdness.gif
Yikes.
I am not making a ticket yet, as I am not entirely sure this is a problem. For one, the color bar indicates that the temp is indeed constant in the ambient, as it should be for this 'isothermal' sim, so maybe this is a visualization artifact? And two, I may have been mistaken about how the new grids are laid down for threshold-based refinement. Is it set up to refine maximally when the Jeans threshold is reached?
Stability movie of the crit BE sphere for 5 crossing times
The ambient is light, 1/100 of rho(Rbe).
http://www.pas.rochester.edu/~erica/stabilityCheck.gif
This was just an octant simulation, using reflecting boundary conditions with self-gravity
Here is a movie of the lineout for rho:
http://www.pas.rochester.edu/~erica/lineoutStabilityCheck.gif
Meeting update
Last week was largely slow due to various issues with Grass. I worked with Rich a lot to get Grass up and going again. Among the bugs we fixed: we are now running Visit 2.5, which is not stalling anymore. Not all of the kinks are worked out, but at least grass is functioning now.
I compiled a reading list for myself of a few textbooks and papers last week and began reading. I am pretty excited about these 3 papers I found on adsabs:
- Foster, Prudence N.; Boss, Alan P., Triggering Star Formation with Stellar Ejecta: "We examine inducing the self-gravitational collapse of molecular cloud cores with stellar ejecta. We study the effect of winds of various strengths arriving at cloud cores modeled as marginally stable Bonnor-Ebert spheres, which are unstable both to collapse and to expansion. We find that some winds instigate collapse of the cloud core, while others result in expansion or destruction of the cloud. Collapse occurs when the incident momentum of the ejecta is greater than approximately 0.1 Msun km s-1 for the standard γ = 1 wind and 1 Msun cloud scenario. The critical momentum, which divides those cases which induce collapse and those which do not, scales as the mass of the cloud times its sound speed, which is 0.2 Msun km s-1 for the standard 10 K cloud. The critical momentum is exceeded for some supernova and many protostellar outflows, although if the wind has a velocity greater than approximately 100 km s-1, the effective adiabatic index will be γ = 5/3 and the cloud will be destroyed, through shredding into many pieces. The planetary nebulae of AGB stars appear to have momenta below the critical value. However, we found that a high wind temperature (T ~600 K), possibly characteristic of AGB star winds, could instigate collapse even in low momentum winds." http://adsabs.harvard.edu/abs/1996ApJ...468..784F
- "Formation and collapse of quiescent cloud cores induced by dynamic compressions" Gómez, Gilberto C.; Vázquez-Semadeni, Enrique; Shadmehri, Mohsen; Ballesteros-Paredes, Javier, http://adsabs.harvard.edu/abs/2007ApJ...669.1042G
- "Radiation hydrodynamics of triggered star formation: the effect of the diffuse radiation field". Haworth, Thomas J.; Harries, Tim J., http://adsabs.harvard.edu/abs/2012MNRAS.420..562H, "We investigate the effect of including diffuse field radiation when modelling the radiatively driven implosion of a Bonnor-Ebert sphere (BES). Radiation-hydrodynamical calculations are performed by using operator splitting to combine Monte Carlo photoionization with grid-based Eulerian hydrodynamics that includes self-gravity. It is found that the diffuse field has a significant effect on the nature of radiatively driven collapse which is strongly coupled to the strength of the driving shock that is established before impacting the BES. This can result in either slower or more rapid star formation than expected using the on-the-spot approximation depending on the distance of the BES from the source object. As well as directly compressing the BES, stronger shocks increase the thickness and density in the shell of accumulated material, which leads to short, strong, photoevaporative ejections that reinforce the compression whenever it slows. This happens particularly effectively when the diffuse field is included as rocket motion is induced over a larger area of the shell surface. The formation and evolution of 'elephant trunks' via instability is also found to vary significantly when the diffuse field is included. Since the perturbations that seed instabilities are smeared out elephant trunks form less readily and, once formed, are exposed to enhanced thermal compression."
Last week I got the code up and running on grass and BH. I re-ran a stability check on the B&P sphere for 5 crossing times. I wasn't able to make a movie over my remote connection, due to an error I don't yet understand:
-visit is waiting for a cli to launch on local host VisIt could not connect to the new client /usr/local/alt/visit2_5_0.linux-x86_64/bin/visit
But, the sphere did appear to be stable. It oscillated about its equilibrium, although it expanded much more than condensed. Not sure if this is significant or unexpected…
Next, wanted to tidy up the runs I worked on over the summer, by adding more sophisticated refinement criteria, and running out until a sink formed. I learned about the refinement objects and added the 2 criteria to my problem module: 1) refine on density gradients, and 2) refine on Jean's threshold. I didn't see any apparent conflict in the code that would prevent multiple flags to be called… is this the case?
When I tried to run the B&P matched case, however, the code froze on grass after initializing only the first 3/5 grids. It just stopped printing lines, although top showed it was still computing… I tried it in the debug queue on BH, but it seg-faulted. I am not entirely sure why changing the ambient density from very light to heavy would change the computation, other than that the Jeans refinement may be triggered and the entire domain refined. The way my data file was set up, though, I thought Jeans-based refinement was restricted to the higher levels…
Lastly, I wanted to check, before running large sims, whether restarts with self-gravity are working okay in the golden version. It seems there is a bug with them (posted a ticket). This makes long jobs a little risky to run.
updated movies
New movies:
Here are larger resolved runs ~ 2x clump radius:
Matched case: http://www.pas.rochester.edu/~erica/MatchedJuly26.gif
Light case (1/1000th ambient rho): http://www.pas.rochester.edu/~erica/NewLight1000thJuly28.gif
Going into a hole for this upcoming month, studying for the prelim. See you on the other side!
Meeting update
I have done 5 new runs, all with BE sphere in a box that is ~ 30 BE radii across. The runs are as follows:
A light case with rho_ambient=0.001rho(Rbe):
http://www.pas.rochester.edu/~erica/Ambient1000thVradJuly23.gif
Here I see the sphere remain very stable for ~4.5 crossing times, but then it starts expanding pretty fast, and I am not sure what the cause would be. I see a velocity wave penetrating the sphere, and it seems to be tied to the expansion at the end. Could this be numerical diffusion at the boundary of the sphere, leaking velocity inward? Since the mass of the ambient cells is so slight, I wouldn't expect any momentum to be transferred inward.
A BE sphere with a 10% density enhancement (Banerjee and Pudritz setup):
http://www.pas.rochester.edu/~erica/BPJuly22.gif
This is the classic outside-in collapse of the BE sphere, markedly different from the matched and intermediate cases.
A BE sphere in an ambient medium whose density matches the sphere's outer edge:
http://www.pas.rochester.edu/~erica/allOnOneJuly19.gif
And 3 intermediate densities between rho matched and rho ambient = 0.01 rho(Rbe), in order of densest ambient to lightest:
http://www.pas.rochester.edu/~erica/Light105July22.gif http://www.pas.rochester.edu/~erica/Light70July22.gif http://www.pas.rochester.edu/~erica/Light35July22.gif
As a remark on the stable case above: it is definitely stable within the time scales of the other simulations.
Meeting update
Here are updated movies of my "matched" ambient case:
Note dx = 0.03, and cs=0.4 km/s:
Pressure: http://www.pas.rochester.edu/~erica/pressureMovieJuly15.gif
Rho: http://www.pas.rochester.edu/~erica/rhoMovieJuly15.gif
Vrad: http://www.pas.rochester.edu/~erica/vRadMovieJuly15.gif
Here is a pic of the mesh:
http://www.pas.rochester.edu/~erica/MeshZoomed0001.png
I think I am having issues with my restarts. I just ran a bigger mesh of this problem on BH, and after the restart I am again seeing a jump in vrad at the boundaries. Also, there appears to be no inflow at the corners of my box. I am using the winds module to step on the velocity at the boundary.
Before restart: http://www.pas.rochester.edu/~erica/vRadRestart0001.png
After restart: http://www.pas.rochester.edu/~erica/vRadRestart0000.png
Meeting
I have been focusing on the prelim for the last week, so no real updates on research. I am attempting to refine my simulation with the use of tracers. It was pretty straightforward to assign a tracer in my simulation, but I am still playing with how to control the refinement using the tracer I put inside my clump.
Meeting update
This past week I have:
*Read the Hennebelle paper (very good/pertinent to what I am doing now).
*Added Poisson BC parameters under the tutorials>data files explained>global.data
And worked on the following sims:
*Reflecting quadrant of sphere:
I learned more about the role of BCs in fluid simulations, that is, how they control the fluid variables across boundaries. For instance, when slicing through a symmetric object, the boundaries on the faces of the slice should be reflecting, so that the flow at the boundary is what it would be by symmetry. I also learned why using extrapolating BCs at a boundary with subsonic flow may lead to issues: the ghost region copies data from adjacent domain cells, and if the copied velocities happen to point in the 'wrong' direction (which can happen from random thermal motions in a subsonic flow), they feed flow back in the wrong direction at that boundary. The winds module, when used, sets any inflow to 0 but allows outflow. With this in mind, I set the domain BCs to be reflecting on the 3 faces of the sliced sphere and used the winds module on the other sides (periodic was NOT an option, as you need a pair of boundaries for periodic to work!). For the elliptic boundaries I am likewise using reflecting on the same 3 sides, and multipole everywhere else. I am unsure how to proceed from here, given that the sim is currently stuck (see ticket - https://clover.pas.rochester.edu/trac/astrobear/ticket/228). A throwaway sketch of these three ghost-zone behaviors follows.
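(Illustrative Fortran only, not AstroBEAR's actual boundary code; all names and numbers are made up.)

    program bc_sketch
      implicit none
      integer, parameter :: nx = 8, ng = 2
      real :: vx(1-ng:nx+ng)      ! x-velocity, with ng ghost zones per side
      integer :: i
      vx = 0.0
      do i = 1, nx
         vx(i) = 0.5 - 0.1*i      ! subsonic profile; inward-pointing at the right edge
      end do
      ! Reflecting: mirror the interior and flip the normal velocity, so the
      ! boundary acts as a symmetry plane
      do i = 1, ng
         vx(1-i) = -vx(i)
      end do
      print *, 'reflecting ghosts (left):   ', vx(0), vx(-1)
      ! Extrapolating (zero-gradient): copy the edge cell outward; if the edge
      ! velocity points inward, the ghosts keep feeding flow into the domain
      do i = 1, ng
         vx(nx+i) = vx(nx)
      end do
      print *, 'extrapolated ghosts (right):', vx(nx+1:nx+ng)
      ! Wind-style: same copy, but clamp inflow to zero (at a right boundary,
      ! vx < 0 in the ghosts would be inflow) while letting outflow through
      do i = 1, ng
         vx(nx+i) = max(vx(nx), 0.0)
      end do
      print *, 'wind ghosts (right):        ', vx(nx+1:nx+ng)
    end program bc_sketch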
*Larger box size, higher refinement inside the BE sphere:
I learned why refinement was not being triggered in my matched ambient simulation (see ticket 223). After making an adjustment to my problem module, I am now running this setup on the larger domain (30.8³) with 6 levels of refinement. This refines the clump to about 33 cells per radius. I am running the sim out to t=0.2, which is about 2x longer than the time it took for Pthermal > Pcrit, and with 200 frames. This gives a temporal resolution ~10x greater than before, so we can see more clearly the cause of the anomalous "bounce" that concerned us before.
I am also rerunning the light ambient case at the same resolution, 30³ + 6 levels.
Both of these sims are using the winds module at the domain boundary, and multipole for the elliptic.
Concerning refinement of the matched case: if we are interested in refining the clump down to 10% of its original size with 30 cells per radius, 7 levels of refinement would be sufficient. However, the way I am currently refining — with a geometric condition — seems HIGHLY inefficient. A much better way would be to use the Jeans length again, but to figure out how to restrict the Jeans refinement to within the sphere only (not the Jeans-unstable ambient medium). Will talk to Jonathan about this; a sketch of the logic I have in mind follows.
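(Names hypothetical, not AstroBEAR's actual ErrFlags interface; the density gate stands in for a tracer threshold, which would serve the same purpose.)

    subroutine set_err_flags(rho, cs, dx, rho_clump_min, flags, n)
      implicit none
      integer, intent(in)  :: n
      real(8), intent(in)  :: rho(n), cs, dx, rho_clump_min
      integer, intent(out) :: flags(n)
      real(8), parameter :: G  = 6.67d-8        ! cgs
      real(8), parameter :: pi = 3.14159265d0
      real(8), parameter :: njeans = 4.d0       ! demand >= 4 cells per Jeans length
      real(8) :: lambda_j
      integer :: i
      flags = 0
      do i = 1, n
         ! isothermal Jeans length at this cell's density
         lambda_j = cs * sqrt(pi/(G*rho(i)))
         ! flag only if the Jeans length is under-resolved AND the cell is
         ! inside the clump, so the Jeans-unstable ambient stays unrefined
         if (rho(i) > rho_clump_min .and. lambda_j < njeans*dx) flags(i) = 1
      end do
    end subroutine set_err_flags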
The matched ambient case is more than halfway done now (the external pressure on the sphere's edge should exceed the threshold by now), and the light case is ~1/10 of the way. Hope to post figures/movies with the results by the meeting tomorrow — at least what I'll have by then.
Meeting update
I have updated my BE page with movies of the lineouts for rho and vrad of the matched case, where there seemed to be a bounce (see earlier blog for this reference as well), and also have added comparison plots to the earlier B&P collapse study. Both of these additions can be found here: https://clover.pas.rochester.edu/trac/astrobear/wiki/u/BonnorEbertMatched#Discussion.
I am also playing with getting the sphere to be in a bigger box by simulating just an octant of the original run and working through some calculations of Bondi accretion.
Movies of my lineouts
In the group meeting we noticed the density of the sphere in the matched case drop below the previous times as the density wave was moving into the sphere, by 3.7 Myr (t=0.176 in computational units). We wondered if this was due to a compression wave bouncing off the dense core forming in the center. It turns out that a sink particle had not formed by 3.76 Myr… Also, there does not seem to be a concurrent outward velocity wave, which you would expect with a bounce…
Movies of the lineout in log space of density and radial velocity for the BE sphere in the matched condition:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/BonnorEbertMatched/LineoutMoviesMatched
Meeting update
I've been studying the effects of a heavier medium on the collapse of a BE sphere. The sphere was found to be stable against collapse in a light medium (1/100 the density of the sphere's outer edge), but collapsed fairly quickly when the ambient density matched the sphere's outer edge. The infalling material increased the density, and hence the pressure (the sim is isothermal, so they are directly proportional), at the sphere's outer edge. Since the sphere was a critical sphere, any increase in Pext(Rbe) sends it out of equilibrium. Thus it seems a rise in thermal pressure alone (not strictly ram pressure) was sufficient to induce collapse.
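For reference, this is just the flip side of the Bonnor-Ebert mass formula quoted later in these notes (my rearrangement, so worth double-checking): inverting

    mBe = 1.18 * aT^4 / (P^(1/2) * G^(3/2))

for the pressure gives a critical external pressure

    Pcrit = (1.18)^2 * aT^8 / (G^3 * M^2) ≈ 1.4 * aT^8 / (G^3 * M^2),

so at fixed mass and sound speed, any rise of Pext above this value leaves no equilibrium for the sphere to return to.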
My new project page can be found here:
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/BonnorEbertMatched
meeting update
I have been reading loads of great papers and trying to absorb all the equations, methods, etc. I am currently working on putting together a written summary of what I have been learning, which will lead into the research project I have been pondering. I plan on working on that tonight and maybe tomorrow. As I have posted, my figure for my BE collapse is not quite the same as the B&P run at later times, though qualitatively it looks correct. I am thinking about equations that would be relevant in determining a predicted core density for the collapse; this has not been clearly presented in the literature. After my summary, maybe tomorrow or the next day, I want to work through some calculations to better understand my B&P output. This is the first step I'd like to take — fully understanding the B&P setup I have run, both the relevant collapse calculations and the numerics. Likely this should have been the other way around: equations first, sim second.
In other news, I will be moving to BH ASAP to run calculations. I have been running a sim on bamboo for days now (sorry guys) to check the density I have gotten from my restarts (when it begins to deviate from B&P's results), just as a sanity check. I hear a new revision is out, so I will try compiling it on BH tonight.
I'd like to estimate the expected run time of my simulation to compare with how long it has been taking on bamboo. I think I remember there is a toy calculation somewhere on the wiki on how to do this…
Lastly, I think the B&P setup is slightly different from my run. It is not outlined in the section I have been focusing on (the isothermal collapse section), but in previous sections B&P say all runs have angular velocity, and on top of the 10% overdensity there is another density perturbation that depends on the azimuthal angle. I would like to a) determine an expected density for MY collapse setup and compare to my results, and b) determine how the rotation would be expected to alter the density profiles I have found. This seems a cleaner way to compare my results than rerunning the sim with the rotation.
new plots
Radial velocity matches B&P; however, density is off by a factor of ~4 at later times… I am checking my calculations and will post a follow-up soon.
Awesome pic on home page
Image of the day looks killer. What is the sim of? It looks like a tornado.
Meeting Check-in
Hi all, I am running into some resistance from astrobear in finishing up my simulation of the Banerjee setup. I posted a ticket that describes the sort of trouble I am running into. The good news is the code does appear to be running the problem accurately: up until frame 4/6, everything looks pretty spot on. The spike at the edge of the sphere in the radial velocity plot is likely attributable to poor resolution of the sphere's edge; I bet that spike would average out to 0 in a radially averaged plot. I would like to rerun the sim with ambient rho = the sphere's outer edge rho, but am curious to find out what has been happening with my sim before progressing. Any thoughts greatly appreciated. Thanks, E
Meeting update
I don't have much to post currently, as coursework is particularly heavy this week… :/ I did confirm that periodic boundaries reduced the inflow of ambient gas in my sphere setup. I am waiting on the output from my high-res run, which uses the periodic boundaries and the correct resolution. Will post the results soon.
Meeting Update
Here are the results of my sim that was set up with the same params as Banerjee's:
Density: http://www.pas.rochester.edu/~erica/BannerjeeRunRho.png
Radial Velocity: http://www.pas.rochester.edu/~erica/BannerjeeRun_Vrad.png
Banerjee paper link:
Meeting
This week is really hectic with my stat mech midterm and other homework assignments. I am running the Banerjee setup now, but will likely not be able to post plots until after my exam on Friday.
Run is located at:
/alfalfadata/erica/PROJECTS/BE_stability/12FEB2012/Bannerjee_stuff
Params:
- R = 1.62 pc
- Rho_central = 3.35 x 10^-21 g/cc
- Xi = 6.5
- Box size = (25 pc)³
- Collapse initiated by reducing thermal energy of clump in problem.f90
- Close or exact matches of Banerjee's reported params (Rho_c, R, Xi, isothermal sound speed, clump mass), except my temperature seems to be 2 times greater than Banerjee's — 40 K instead of 20 K. I have a hunch it may have to do with the fact that I am using Xmu = 2… Should this be rectified before comparing results?
Meeting update
Here are the Banerjee plots for my sub-critical sphere (xi=4), which collapsed when the ambient medium was in pressure equilibrium with the edge of the sphere and at the same density. The entire box is isothermal.
Here is density: http://www.pas.rochester.edu/~erica/density.png
Here is radial velocity: http://www.pas.rochester.edu/~erica/radialVel.png
The scaled times from the Banerjee paper run from ~3.7-7, so my time samples are slightly different. The radial velocity trend looks nice. As for the density, it seems the ambient medium falls inward and piles up at the edge of the sphere at earlier times. By the last time, the curve has the right shape compared with the curves in Banerjee's paper. One big difference I notice is that in my run the ambient seems to have set up some sort of equilibrium with the sphere, whereas in Banerjee's sim the ambient stays uniformly flat throughout the collapse. I see that in that run the ambient is 100 times lighter than the sphere's outer edge and the box's boundaries are very far from the sphere. There are other differences between the simulations as well; for instance, there the sphere was a critical sphere with a 10% density enhancement to initialize the collapse. Also, in my simulation a sink particle formed between t/to = 3.84 and 5.12. Is it no longer correct to compare sims after sink formation?
What next steps should be taken?
Digging through some literature on the physics of embedded clouds..
Stability of Bok Globules, theory and numerics:
http://adsabs.harvard.edu/abs/2007PhDT........19R
Isothermal spheres embedded in a homogeneous medium: http://adsabs.harvard.edu/abs/1986A%26A...165....1U http://adsabs.harvard.edu/abs/1980A%26A....83...95S http://adsabs.harvard.edu/abs/2011MNRAS.414.2728Y http://adsabs.harvard.edu/abs/1968MNRAS.140..109Y http://adsabs.harvard.edu/abs/1970MNRAS.151...81H http://adsabs.harvard.edu/abs/2000PASJ...52..217H http://adsabs.harvard.edu/abs/2004astro.ph..2021K http://adsabs.harvard.edu/abs/1996ApJ...469..194M http://adsabs.harvard.edu/abs/1979A%26AS...38..295C http://adsabs.harvard.edu/abs/1980MNRAS.190..497K http://adsabs.harvard.edu/abs/2000prpl.conf...59A
Virial Thm analysis:
http://adsabs.harvard.edu/abs/2007IAUS..237..410D
Numerics: http://adsabs.harvard.edu/abs/1984Ap%26SS.100..447J http://adsabs.harvard.edu/abs/1990ASSL..162..333B
Stability in rotating isothermal spheres: http://adsabs.harvard.edu/abs/1989ApJ...341..220T http://adsabs.harvard.edu/abs/1988PThPS..96...50K http://adsabs.harvard.edu/abs/1984ApJ...286..529T http://adsabs.harvard.edu/abs/2007arXiv0706.3205C http://adsabs.harvard.edu/abs/1985Ap%26SS.113..125C http://adsabs.harvard.edu/abs/1988PThPS..96...50K
MHD stability analysis: http://adsabs.harvard.edu/abs/1982PASAu...4..371P http://adsabs.harvard.edu/abs/1997RoAJ....7..143B http://adsabs.harvard.edu/abs/1976ApJ...207..141M http://adsabs.harvard.edu/abs/2005MNRAS.359.1083G
Stability Self Gravity: http://adsabs.harvard.edu/abs/2002A%26A...386..732C http://adsabs.harvard.edu/abs/2003astro.ph..7532F
Equation of state: http://adsabs.harvard.edu/abs/1986Ap%26SS.124..295M
Boyle's law and Gravitational instability: http://adsabs.harvard.edu/abs/1956MNRAS.116..351B http://adsabs.harvard.edu/abs/2001A%26A...375.1091L
Grav. Collapse of isothermal sphere: http://adsabs.harvard.edu/abs/1992AAS...181.5801F http://adsabs.harvard.edu/abs/1977ApJ...218..834H
Additional physics: http://adsabs.harvard.edu/abs/2001PhRvD..63h4005S http://adsabs.harvard.edu/abs/2011arXiv1105.5128J http://adsabs.harvard.edu/abs/2011A%26A...535A..49S http://adsabs.harvard.edu/abs/1996hell.conf..191B http://adsabs.harvard.edu/abs/1992Ap%26SS.193..173Y http://adsabs.harvard.edu/abs/1975MNRAS.172..441Y http://adsabs.harvard.edu/abs/1975MNRAS.171...85Y http://adsabs.harvard.edu/abs/1968MNRAS.140..109Y http://adsabs.harvard.edu/abs/1958MNRAS.118..523B http://adsabs.harvard.edu/abs/1984ApJ...276..737S http://adsabs.harvard.edu/abs/1988MNRAS.234..821O
update
Ran a series of tests on the BE module. I found that I can get a stable sphere when running my old data files with the current version of astrobear I had been using; there was an issue with my data files. I then found I could get a stable sphere running my current data files, but at a lower resolution and smaller box size. This narrowed the change in the data files down to resolution/box size causing the instability. I am now trying to determine what causes the instability (a larger box, a higher resolution, or the combination of the two?). It seems that perhaps the larger box in which the sphere sits may be throwing it out of equilibrium? Perhaps the BE solution can withstand only so much material outside of the sphere that is out of equilibrium…
Also, of possible interest: in the runs that were collapsing (but that shouldn't have been…), the setup was initially in hydrostatic equilibrium, but seemed to remain in equilibrium throughout the collapse…?? I made vector plots of the acceleration, (-gradP-gradPhi)/rho, and saw vectors in the ambient pointing inward (the gravitational acceleration), but inside the sphere all vectors canceled out — i.e. the forces balanced inside the sphere. This behavior persisted as the collapse proceeded, which is contrary to what I would expect: in the event of collapse, the inward gravitational acceleration vectors should dominate inside the sphere. Something to look into at another time.
Sorry, I likely won't be able to post movies or figures this week because of homework, but will try to have a concrete diagnosis of what went wrong before…
Meeting update Vday
3 runs:
Test for stability - ambient 10 x lighter:
Radial velocity lineout: http://www.pas.rochester.edu/~erica/rhoweight10_vrad.gif
Log density lineout: http://www.pas.rochester.edu/~erica/rhoweight10_rhoLineout.gif
Log density: http://www.pas.rochester.edu/~erica/rhoweight10_rho.gif
Pressure: http://www.pas.rochester.edu/~erica/rhoweight10_pressure.gif
Test for stability - ambient 100 x lighter:
Radial velocity lineout: http://www.pas.rochester.edu/~erica/rhoweight100_vrad.gif
Log density lineout: http://www.pas.rochester.edu/~erica/rhoweight100_rhoLineout.gif
Log density: http://www.pas.rochester.edu/~erica/rhoweight100_rho.gif
Pressure: http://www.pas.rochester.edu/~erica/rhoweight100_pressure.gif
Test of Collapse:
Radial velocity lineout: http://www.pas.rochester.edu/~erica/rhoweight1vrad.gif
Log Density Lineout: http://www.pas.rochester.edu/~erica/rhoweight1rhoLineout.gif
Log Density: http://www.pas.rochester.edu/~erica/rhoweight1.gif
Meeting update
Here are the movies of my 2D BE run:
http://www.pas.rochester.edu/~erica/rhoFeb62012.gif
http://www.pas.rochester.edu/~erica/rhoLineoutFeb62012.gif
http://www.pas.rochester.edu/~erica/vRadLineoutFeb62012.gif
In the process of getting 3D runs posted. Currently a job is in Bluehive queue.
2D Bonnor Ebert Circle for quick and dirty runs ready
The output is at /alfalfadata/erica/PROJECTS/BEsphere2D/out and the executable was made from /alfalfadata/erica/DEVEL/BEsphere2D. Mainly, the modification in the code had to do with moving the clump object off of the default center position of the box (0,0,0). Adding the clump param xloc to the problem module allows one to read in the location of the clump; I set it to (2,2,2). To make sure the bounds in global.data were set such that the sphere was sliced down the middle, the domain was set to run from (0,0,2) to (4,4,2). That is a slice in the xy plane at z=2, i.e. a slice through the center of the sphere at xloc = (2,2,2).
Odd, higher res BE movie
Here are movies of a higher resolution BE run:
http://www.pas.rochester.edu/~erica/upperboundBE.gif — upper bound on color bar
http://www.pas.rochester.edu/~erica/noboundBE.gif — no bound on color bar
http://www.pas.rochester.edu/~erica/reg1182012.gif — initial bounds on color bar
- 86³ root grid + 3 levels
- RefineVariableFactor set to 0 to trigger refinement based on Jeans length
- Ambient medium density = density at outer edge of sphere, entire domain is isothermal
- boundary conditions = all velocities into and out of grid are stepped on
- Poisson BC's = multipole
I killed the run after the diamond shape appeared. We have seen this before and attributed it to pressure waves running off the corners of the box. However, the disconcerting asymmetry in the last frame is new.
I am going to move the edges of the box further from the sphere and change the boundary conditions to periodic, to mimic the sphere sitting in a homogeneous, 'infinite' medium.
========================================================
86³ base grid + 5 levels, periodic BCs on box:
- Larger domain: http://www.pas.rochester.edu/~erica/largerBE1172012.gif
Meeting update 1/17/2012
Particle Kicks: http://www.pas.rochester.edu/~erica/SinksUpdate1172012.html Both gas-gas and particle-gas conservation of momentum have been implemented, and the kick that was present before is gone.
Bonnor Ebert: Working on simulations. Would like to think more about the system I am trying to model and the model I am trying to create for that system. I would like to choose appropriate boundary conditions and wonder whether I should be initializing a stable BE sphere or an unstable one when I run the tests I emailed Phil about.
Here is a heavy run: http://www.pas.rochester.edu/~erica/BEupdate1172012.html. I am getting HIGH cfls and the totals.dat file is wonky..
Working on getting a 2D version of the BE problem up and running.
Meeting update
Happy New Year all.
I am working on setting up new BE stuff to begin a collaboration with Prof. Phil Myers - reading his latest papers and reviewing old studies. I am also checking on the status of the conservation of momentum added to the sink algorithms.
Erica
P.O.I:
http://www.pas.rochester.edu/~erica/SinksUpdate.html http://www.pas.rochester.edu/~erica/comparison.gif
New rotation movie-
New movie posted of the rotating collapse from a side-on perspective -
http://www.pas.rochester.edu/~erica/sinks.html
Also — modification of the fflags in astrobear 2 happens in Makefile.Inc — use the -g and -traceback flags for debugging.
Rotating simul update
Check out my rotating BE sphere~!
meeting update 11.29.11
I have been debugging this past week. I ran into some issues with linked libraries (or the lack thereof) on bluehive for the updated version of the code I was running. After fixing that, there were some apparent problems with the simulation itself. I tracked down those 'bugs' (some code for the initialization of ambient objects had been left out of the newer version of the code I pulled) and have since submitted the jobs to bluehive. The queue is a bit laggy right now, but I hope to finish the prelim sink tests and the beta rotation study by the end of the day tomorrow, given all goes smoothly with the jobs in the queue now.
Added Banerjee & Pudritz to my library.
Banerjee and Pudritz employed an EOS that was density and time dependent. They also incorporated radiative cooling into their model and studied both the isothermal and non-isothermal phases of the collapse of rotating BE spheres. As this is already much more complicated than our current rotating BE clumps, I am not sure how much overlap in results we can expect. Furthermore, they report seeing no fragmentation or 'non-central' collapse in the isothermal phase… :/ (Currently our setup is isothermal only.)
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary
So I hope this isn't too silly, but, on an aside, if one does not initialize differential rotation, can the simulation relax to such a state on its own? Is this possible without the equation for such rotation explicitly written into astrobear? Or, do all numerical models operate on "if this condition, use this equation, for this q variable"? ..
Scratch notes on repo info. for sink debugging and rotation tests
'astrobear_mercurial_test' @ bluehive is where current build of astrobear was made
The repository here is a blend of the most recent developer source and johannjc's changeset with correction to the *Subroutine Finalize Accretions*.
I have modified the files here to include write statements in particle_control.f90 L261 for the sink particle's location, dQ array, and d(rmass). Also, I have added a write statement in the particle_info_ops.f90, L519 to report the gradient of phi.
In addition, I have modified the BE problem.f90 to induce collapse and to add an omega rotation parameter.
Is the next step for mercurial to check in these changes and then merge?
Two jobs are in the queue @ bluehive:
1) I am re-running the 86³ fixed grid, shifted ½ dx in x and y, with the new print statements to plot the changes in the above params over time.
2) Rotation of BE sphere.
Notes on the sinks
Today I talked with Jonathan about the sink implementation. Here are some notes on what I learned.
The calculation of forces is broken down into 4 types —
1) gas-on-gas (source/selfgravity.f90), 2) particle-on-gas (source/pointgravity.f90), 3) gas-on-particle (particle/particle_advance.f90), and 4) particle-on-particle (particle/particle_advance.f90)
Sinks have a 'point gravity' source term, and there are usually softening factors involved in calculating the gravitational force (to avoid NaNs when the particle is at the center of a cell). The force is calculated from the gradient of the potential in the neighboring four cells. This can be seen in source_control.f90.
Once the forces are calculated, the motion of the particle is advanced. This is done by a 'leapfrog' method, in particle_info_ops.f90. The 2 subroutines that control the particle's motion/position are 'advance particles' and the 'accrete' subroutine. A minimal standalone sketch of both pieces is below.
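(Illustrative only; this is not the code in particle_info_ops.f90, and GM, eps, and the orbit are made-up numbers.)

    program leapfrog_sketch
      implicit none
      real(8) :: x(3), v(3), a(3), dt
      integer :: step
      real(8), parameter :: GM = 1.d0, eps = 0.05d0   ! eps = softening length
      x = (/1.d0, 0.d0, 0.d0/)
      v = (/0.d0, 1.d0, 0.d0/)
      dt = 0.01d0
      call accel(x, a)
      do step = 1, 1000
         v = v + 0.5d0*dt*a       ! half kick
         x = x + dt*v             ! drift
         call accel(x, a)
         v = v + 0.5d0*dt*a       ! half kick
      end do
      print *, 'final position:', x
    contains
      ! softened point-mass gravity: the eps**2 keeps the force finite
      ! (no NaNs) when the particle sits exactly at a cell center
      subroutine accel(pos, acc)
        real(8), intent(in)  :: pos(3)
        real(8), intent(out) :: acc(3)
        real(8) :: r2
        r2  = sum(pos*pos) + eps*eps
        acc = -GM * pos / r2**1.5d0
      end subroutine accel
    end program leapfrog_sketch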
In diagnosing and updating the sinks, it is best to begin by debugging the current algorithms. Then we will decide which parts of the algorithm work and which don't. Next, we will add an additional accretion procedure (Bondi-Hoyle) to the code.
Meeting update -- Rotation
The product of the initial central angular velocity with the free-fall time measured from the clump's central density — omega*tff — can be used as a parameter for the rotation problem. Matsumoto & Hanawa ('03) found that when this product is low enough to allow collapse, but larger than ~0.05 (when a disk can form before first core formation), the clump fragments into a particular morphology. When the BE sphere is critical, with a small density "enhancement", SLOW rotation induces collapse.
These authors initialize angular velocity with the following:
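Schematically, the rotation law is of the form (my reconstruction from the description below; the exact expression should be checked against Matsumoto & Hanawa '03):

    Omega(r, phi) = Omega0 * (r/r0)^(-C) * [1 + Omega2*cos(2*phi) + Omega3*cos(3*phi)]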
This perturbs the amplitude of the global rotation, with varying amplitudes Omega2 and Omega3 for the m=2 and m=3 velocity perturbations. The parameter C determines the angular velocity's dependence on r: for C=0 the rotation is rigid-body, and for larger C, Omega decreases more rapidly with increasing r. Both C and beta, the ratio of rotational to gravitational energy, strongly determine the resulting morphology of the collapse — the fragmentation into bars, discs, etc.
The authors Banerjee, Pudritz, and Holmes, on the other hand, study the collapse of both high-mass and low-mass BE spheres. Their initial conditions prescribe a rigid-body rotation that is likewise parameterized by the product Omega*tff; they use values of 0.1, 0.2, and 0.3. They go through the calculation of both alpha (thermal/gravitational energy) and beta (rotational/gravitational energy) for BE spheres and explain that beta is maximum at the OUTER edge of the cloud.
My questions are —
1) Rigid-body vs. Keplerian rotation? What conditions produce each? If we start out with rigid-body rotation, do we expect the collapse to evolve toward Keplerian rotation?
2) Why does slow rotation trigger collapse rather than expansion or nothing? How do alpha and beta change with r and contribute to the total stability of the clump? How does the speed of rotation affect stability?
3) A good way to visualize the simulation output?
4) Initial conditions for the BE sphere — should the set-up be one that is JUST critical, but doesn't collapse? Or should it be a set-up that is already super-critical (thermal energy reduced in grid by 10%)?
Current Run:
Is in the queue in BH
It has:
-An omega*tff = 0.1 (rigid body rotation; see the quick check after this list)
-A critical BE sphere (stable at the critical radius)
-Grid is shifted in x,y by ½ dx
-43³ + 1 level cells — ~20 zones per clump radius
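Quick check of the rotation parameter (a sketch; the central density here is the value quoted for the Banerjee setup elsewhere in these notes and may not be exactly this run's): with tff = sqrt(3*pi/(32*G*rho_c)), the chosen omega*tff product fixes omega.

    program omega_from_tff
      implicit none
      real(8), parameter :: G = 6.67d-8, pi = 3.14159265d0
      real(8) :: rho_c, tff, omega_tff, omega
      rho_c     = 3.35d-21     ! g/cc; central density quoted for the Banerjee setup
      omega_tff = 0.1d0        ! this run's chosen product
      tff   = sqrt(3.d0*pi/(32.d0*G*rho_c))   ! free-fall time at the central density
      omega = omega_tff / tff
      print *, 'tff   (s)   =', tff
      print *, 'omega (1/s) =', omega
    end program omega_from_tff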
Results of recent sink tests
A shift in the xy plane by 1/10 dx DID NOT form a sink (not in AMR, not in fixed grid). Evidently, in order to form a sink, the potential minimum in the grid must be well defined — shifting the grid by 1/10 dx is NOT large enough to force the minimum into a single center cell.
In contrast, shifting the grid by ½ dx in x and y successfully restricted the minimum to a single cell at the center of the grid (in x and y), though it lay along a face in the z plane. Since the potential was symmetric along x and y, a sink was able to form. However, since the potential was asymmetric along z, a kick happened.
The above tests were done using an even number of cells — 86 in each direction. They were prompted by the fact that a sink formed with an 85³ fixed grid, but not when using AMR with a similar effective resolution. This indicated that sink formation rested on whether the potential minimum was centered in the middle of the grid, located in a single cell, as it was in 85³.
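A throwaway check of the even/odd point (illustrative, not AstroBEAR code): printing the cell centers nearest the domain midpoint shows that an even grid straddles the midpoint with two cells, while an odd grid puts it exactly in one cell center.

    program center_cells
      implicit none
      integer :: n, i
      real :: dx, xc
      do n = 4, 5        ! n=4: even, midpoint on a cell face; n=5: odd, in a cell center
         dx = 1.0/real(n)
         print *, 'n =', n
         do i = 1, n
            xc = (real(i)-0.5)*dx            ! cell-center coordinate on [0,1]
            if (abs(xc-0.5) < dx) print *, '  cell', i, ' center at', xc
         end do
      end do
    end program center_cells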
Meeting Update
I have largely been focusing on coursework this week as I am getting ready for midterms. For the group — sink runs are in bluehive, shifting the grid in y and z by 1/10 * dx, to check whether the sink still moves. I'll post the results when they are done.
Meeting update -- week of 10/10
Finished table on SINK Tests.
Meeting update
Here is a page of the tests I re-ran and ran on the sink particle formation:
http://www.pas.rochester.edu/~erica/SINKtests.html.
Note, a sink only formed on a fixed grid with an ODD number of cells. I was unable to form a sink with an even number of cells (and thus also with AMR). The complaint was "system unbound".
Also, this sink DID NOT WANDER.
This may be due to it being in the 'true' center of the grid.
In other news, here is the paper by Banerjee and Pudritz on rotating BE spheres:
Erica's meeting update
I didn't find much literature on a rotating Bonnor Ebert sphere, per se, but some pioneering works on rotating clouds may be helpful:
http://adsabs.harvard.edu/abs/1982ApJ...255..240B
http://adsabs.harvard.edu/abs/1976ApJ...206..138B
http://adsabs.harvard.edu/abs/1972MNRAS.156..437L
As well as some presentations that talk about rotating BE spheres but with additional physics:
The sink tests we talked about on Friday are in the queue. I am only asking for 32 cores for these jobs, but maybe there is a backup in the queue due to the inaccessibility of Bluehive yesterday?
This meeting's update
I ran the following fixed grid tests on the sinks to learn more about why the formed particle is wandering:
- Fixed grid, same set up as before — to see if this problem happens when AMR is shut off.
- Originally, my simulation was a 20³ grid with 2 levels of AMR (this formed a sink). I then ran a fixed grid at 80³, and this did NOT form a sink; the system was reported as not bound, which I had previously fixed by increasing the resolution of the grid. I ran an 85³ simulation, and this DID form a sink. Further, this sink did NOT move out of its 'parent' cell. See the page here - http://www.pas.rochester.edu/~erica/collapse9192011.html. I wondered whether this was a result of the fixed grid alone, or of the higher effective resolution. When I tried to run an AMR simulation with ~85³ effective cells — 21³ + 2 levels — no sink was formed. Given this is a higher-res run than the first I quoted here, this is curious. The differences between the simulations, as far as I can see, are: 1) the refinement criteria in the first called for only the innermost region of the sphere to be resolved, and 2) the global cell numbers were off — in the first, mxGlobal ran to 25 instead of 20. It doesn't seem like this should make a difference; as long as the effective resolution is ≥ that of the old simulations, a sink should form…
I am wondering if I should still do this test, since a sink formed on a fixed grid with an even number of cells: fixed grid, odd number of cells — since a sink forms in the center of a cell, see if restricting the potential minimum to the true center of the grid resolves the issue.
Enlarging the accretion radius from 4 to 8 and 16. This may help with smoothing out the potential.
Checking if increasing the resolution stops the particle from having a kick will also be useful.
Additionally, I added my radial velocity plots to the BE page under the results section and compared to the Foster and Chevalier results in the discussion section. I think this page is more organized and nice since the last edits.
—Also, I noticed a sink formed in the fixed grid simulation earlier — frame 13, as opposed to frame 29 in the AMR simulation. I am running a simulation now with the same geometric refinement criteria I originally formed a sink with, to see if this helps form a sink in the 21³ + 2 levels case above.
—Question - I see coordinates in my sink file, but there are no values in the bottom array… What does this mean? The cell at that position violated the Truelove criterion (density threshold), but did not officially form a sink?
Meeting Update 9/13/11 - Erica
I have put together a summary of the Foster and Chevalier paper on my summaries page - https://clover.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary#no2, and bulleted the results I'd like to compare in my BE tests.
I have made some line plots of the radial velocity. The low resolution in the outer region of the sphere creates a jagged appearance, which will make a fine comparison to Foster and Chevalier difficult. I will work on the tests we mentioned last week in the coming days, which may lead to better plots. Here is a page of the plots so far: http://www.pas.rochester.edu/~erica/new_plots_BE.html.
Update week September 5th
- Sink particle formed in my BE test:
http://www.pas.rochester.edu/~erica/bonner_5Sept11.html
- Updated New Users page with some info I learned about sink implementation
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/NewUsers
- Added summaries to my library
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary
- Added commented problem.f90 to BE project page.
https://clover.pas.rochester.edu/trac/astrobear/wiki/BonnerEbert
meetings for next week?
Hi all — I am finally feeling better, and would like to catch up on things. Can someone please post the meeting schedule for this upcoming week? Are we moving the locals meeting to another day since Monday is Labor Day? Since I missed the New Users meeting, could someone kindly review the AMR talk at the next one for me, please?
Erica
Update on BE problem
- The initialization of the Bonnor Ebert sphere looks as expected.
- Decreasing the thermal energy in the grid leads to collapse.
- Reading the Federrath et al. paper on sink particle implementation now. One of the tests they did was the collapse of a singular isothermal sphere (this is cool). They found the collapse to match Shu's solution, and the accretion rate of material onto the sink also matched Shu's solution for M-dot.
- Would like to finish up this summer's numerical study of the collapse of the BE sphere by relaxing the sinks criteria so as to form a sink during the collapse.
AMR control update
New AMR tests. I am trying to implement a few lines of code in the ProblemSetErrFlags routine in problem.f90 to refine only within some center region of my Bonnor Ebert sphere. Here is what I found:
Meeting update -- 8/15/2011
- Visit use on bluehive up on tutorial page
- Bonnor collapse update - http://www.pas.rochester.edu/~erica/Collapse.html
Mass loss in Bonnor Ebert simulation?
I wanted to make sure that the Bonnor Ebert sphere would behave as expected under conditions of extreme gravitational instability, corresponding to values of the density contrast >> 14.1. Under such conditions I expected the sphere to collapse to very high densities within a few free-fall times. This, however, did not happen, and what seems to have happened instead is concerning: it looks like there is a global mass loss over the grid. Please see the movies at the following link. This sphere has a density contrast of ~200. I looked again at a sphere that should have been unstable at about rho_o/rho_c ~ 20-30, but was not; this global dip in density also occurred there.
http://www.pas.rochester.edu/~erica/bonnerWeirdness.html
Also — NO sinks were formed…
My calculation of mass and mBe:
-volume of ambient = volume of box − volume of sphere = 64 − (4/3)π ≈ 60
-mass of ambient = rho_ambient * volume of ambient = 0.0004 * 60 = 0.024 (mass, computational units)
-mass of sphere = query result for 'weighted variable sum' of rho (of entire box) − mass of ambient = 0.466 (mass, computational units)
To get into cgs, multiply by (rscale/lscale³)? This gives 1.57E-73… (Hmm: the mass scale should presumably be rscale*lscale³ — density scale times volume scale — so dividing by lscale³ instead of multiplying would by itself introduce an enormous factor, ~lscale⁶, which may be part of the mismatch below.)
I then compared this to mBe:
mBe = [1.18 * aT^4] / [P^(1/2) * G^(3/2)]. My simulation gave a sound speed (aT) of 9690 cm/s and a pressure of 2.87E-19.
These answers mismatch by an enormous factor, some 50 decades…
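For the record, here is the check I should be doing (a sketch; rscale is a placeholder since I haven't pinned down this run's density scale, while lscale, aT, P, and the 0.466 c.u. mass are the values quoted in these notes):

    program mass_check
      implicit none
      real(8), parameter :: G = 6.67d-8
      real(8) :: rscale, lscale, m_cu, m_cgs, aT, P, mBE
      rscale = 3.35d-21        ! density scale, g/cc (placeholder value)
      lscale = 7.4d15          ! length scale, cm (example value used in these notes)
      m_cu   = 0.466d0         ! sphere mass in computational units
      m_cgs  = m_cu * rscale * lscale**3     ! mass scale = density scale * volume scale
      print *, 'sphere mass (g)       =', m_cgs
      aT = 9690.d0             ! isothermal sound speed, cm/s
      P  = 2.87d-19            ! pressure, cgs
      mBE = 1.18d0 * aT**4 / (sqrt(P) * G**1.5d0)
      print *, 'Bonnor-Ebert mass (g) =', mBE
    end program mass_check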
AMR control
8/10/11
Today I set all qtolerances to a minimum of 1d-16, after observing that the slight asymmetry of the mesh was eliminated when switching from 0.1 to 1e-16:
http://www.pas.rochester.edu/~erica/vvec2.html
I also wanted to maximize the refinement on my grid.
Next I tried playing with the refine variable option for density to see how it would decrease the refinement in my grid. I ran simulations with values of 1, 0.1, 0.0001, and 1e-64.
With refVar = 1 I expected no change — this was right. But as I decreased the value of refVar, I didn't see an obvious decrease in refinement… Once I got to 1e-64, the grid had lost all refinement except for the center 8 cells, which refined to level 1 (I was running with levels = 2).
I also tried refVar = 0d0, but can't seem to get rid of that center region of refinement like I expected.
I tried adjusting the fill ratios from a constant 0.9 to 0.3. This created a square region of refinement — still can't get the center region of refinement to go away with refVar = 0. Ditto for a fill ratio of 1.
http://www.pas.rochester.edu/~erica/vvec2.html
I looked in the clump and ambient object modules, nothing seems to be overriding the setflags routines there…
8/9/11
Working on understanding the amr controls in astrobear.
Meeting Update - Week of August 8th
- New Users Meeting Page, https://clover.pas.rochester.edu/trac/astrobear/wiki/u/NewUsers
- BE project page, https://clover.pas.rochester.edu/trac/astrobear/wiki/BonnerEbert
- Research blog page? http://astrobearcub.blogspot.com/
- Library page, https://clover.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary
- Can use visit on bluehive. Added some notes on how to do so on the visit tutorials page. Should use Bluehive's compute nodes instead of the login nodes…
BE Update
http://www.pas.rochester.edu/~erica/vvec.html
I am running into some odd behavior with my Bonnor Ebert simulations. The simulations run smoothly for about 3 crossing times, during which there is a density increase of about 500x and the mass is squeezed into a tiny region at the grid center. I have noticed that the mass there is unevenly distributed (i.e., more along one dimension than the others). Then the entire clump expands, fast, but in a squashed geometry. As the simulation progresses, however, the clump tends back toward a spherical geometry.
I am curious why this squashing along one dimension is happening. In earlier simulations I saw distortion caused by pressure waves running off the sides of the cube and interacting with the central sphere; there, however, the geometry remained symmetric, as it should… I am puzzled as to why this asymmetry is appearing. Perhaps the volume into which the material is collapsing is smaller than the grid can resolve, causing some wonky numerical behavior? I noticed a diamond shape occurring at the center of the sphere in the slices of density contours, leading me to believe this could be a matter of resolution. But then, why would one of the squares in the diamond contain more mass than the others? Why would this mass not be symmetrically distributed?
ALSO — I noticed that running AMR with the same effective resolution took ~5x longer…
I am running a 63x63x63 job now, with 3 levels of AMR.
Putting together a library of summaries
As I collect and read through the literature, I thought it would be good for my later reference, as well as potentially for others, to put together a listing of what I learn. I thought it would have a good home on the wiki, so it can easily be made public to others. You can find this on my page, under the projects heading.
https://clover.pas.rochester.edu/trac/astrobear/wiki/u/EricasLibrary
BE update
Here is pressure http://www.pas.rochester.edu/~erica/Pressure_07182011.gif
Here is density http://www.pas.rochester.edu/~erica/Density_07182011.gif
Weekly Update- Week of July 18th
I have been working on initiating collapse of the Bonnor Ebert sphere. Why does a BE sphere that has no change across its boundary in density, pressure, and temperature collapse under self-gravity, WHILE a BE sphere with a 20x greater external pressure does not? Is astrobear decreasing the density by a similar factor in the ambient to maintain an isothermal state…?
NO change to the external medium from the inside of the sphere (Pout = (rho_out/rscale)*(iso_t/tempscale), rhoOut = rho_out/rscale). This causes collapse, as we saw before, because the medium is more massive than the clump. We rectified this by dividing rhoOut by a weight that decreased the ambient density.
Now I would expect that leaving the external density the same and INCREASING the external pressure further would also lead to collapse. I am unsure why this is not happening. The external pressure (Pout) is now about 20x greater.
Meeting update - week of July 11
I have created a page for the Bonnor Ebert project. This can be found on my page, https://clover.pas.rochester.edu/trac/astrobear/wiki/u/erica, or on the projects page, https://clover.pas.rochester.edu/trac/astrobear/wiki/AstroBearProjects.
Here is a direct link:
https://clover.pas.rochester.edu/trac/astrobear/wiki/BonnerEbert
Adding a new module to the astrobear framework
I wanted to document the process of inserting a new module (i.e. one that is not yet built into the astrobear architecture) into the makefile, and thought this would be a good place to do so.
In the astrobear main directory, open the makefile (lower case m). This lists the dependencies of everything in a hierarchy of files, from those that are more "global" to those that are not.
Look for the obj_files = list. That is where you should insert your module's object file name. It should be listed after the dependencies it uses, but before the problem module (or any other module, for that matter) that calls it. Be sure to use the same syntax as the other elements in the list — namely, end the line with ".o \" (the backslash continues the list onto the next line). Once this is done, you are ready to make. A short excerpt is below.
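For instance (an illustrative excerpt only; the real list is much longer, and these file names are placeholders):

    # new entry goes after its dependencies and before the problem module
    # that calls it; note the trailing "\" line continuations
    obj_files = globals.o \
                my_new_module.o \
                problem.o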
BE update
Hi all. I tried stepping on the boundaries using nWinds, taken from the TrueLove problem.f90, to see if that fixed the noise from the previous movie:
This, I think, looks worse… I am attaching the data files I am using, as well as the problem module. I see that the word "chombo", which indicates the current frame, shrinks when going from frame 0 to 1. I thought this might be an indication that the frame was old, or that they are from mismatched data sets. But this is not the case…
I am currently running the simulation with my original boundary conditions to make sure I can duplicate the previous run. I am also attempting to run with periodic boundary conditions everywhere… but this is reporting a long string of high CFL warnings. From the first frames it produced, though — 0, 1, and 2 — everything looks good.
So here is a quick comparison of the 2 different simulations (my first run, described in the previous post, and the new one with winds turned on)…
Scaling for BE
My equations are cast in CGS. Their output then must be divided by appropriate scales before being read into the q-arrays.
For the BE problem I am following the table in this presentation for some parameter values —
http://astro1.physics.utoledo.edu/~megeath/ph6820/lecture6_ph6820.pdf
The input vars are the central density (10d3 - 10d4; presumably number densities per cc rather than g/cc), xi (the nondimensional radius), and r (~1 pc). The crossing time is ~5 Myr.
Referring to pg. 244 of Stahler, xi should be chosen by the rho(R)/rho_central > (1/14.1) criterion, i.e. a density contrast below 14.1. This looks to correspond to a xi less than 5…
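To see where that criterion comes from, one can integrate the isothermal Lane-Emden equation and watch the density contrast grow with xi (a quick sketch, not AstroBEAR code; simple Euler steps are plenty here):

    program lane_emden
      implicit none
      ! Integrate the isothermal Lane-Emden equation
      !   psi'' + (2/xi) psi' = exp(-psi),  psi(0) = psi'(0) = 0,
      ! where rho_c/rho(xi) = exp(psi).
      real(8) :: xi, psi, dpsi, d2
      real(8), parameter :: h = 1.d-4
      xi = 1.d-6; psi = 0.d0; dpsi = 0.d0      ! tiny starting xi avoids the 1/xi singularity
      do while (xi < 10.d0)
         d2   = exp(-psi) - (2.d0/xi)*dpsi
         psi  = psi + h*dpsi
         dpsi = dpsi + h*d2
         xi   = xi + h
         ! print the density contrast every 0.5 in xi; it passes the critical
         ! value 14.1 near xi = 6.45
         if (mod(int(xi/h), 5000) == 0) &
            print '(a,f6.2,a,f10.3)', ' xi =', xi, '   rho_c/rho =', exp(psi)
      end do
    end program lane_emden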
In physics.data I am setting rscale, lscale, and … TempScale???
Scaling
So it seems the units of the scales read in by astrobear, defined as "rscale" and the like, have pre-set units: i.e. the rho scale (rscale) is in g/cc, lscale in cm, etc.
These scales are divided into your problem variables (which have the same units as the scales) to create dimensionless quantities, called computational units (c.u.), that are compatible with astrobear.
It is useful to assign a value to these scales that will result in a small computational unit after division.
This means choosing physically appropriate scales; here is an lscale that might be relevant to your problem:
lscale = 7.4d15 !cm
Here, 1 c.u. (computational unit) of length is equivalent to 7.4 x 10^15 cm.
So if you have a clump, say, with a length on the order of this scale, you can divide its length by this scale to get something close to 1 that astrobear uses.
If you want to use units other than CGS, you may personalize them in problem.data, as long as you convert them to CGS before dividing by the scales (remember, the goal is non-dimensionalization). The flow of info should go something like:
vars in problem.data → read into problem.f90 → populate q-arrays with var/varScale
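In miniature, with made-up numbers (the lscale is the example value above; rscale and the inputs are placeholders):

    program scale_demo
      implicit none
      real(8), parameter :: rscale = 3.35d-21   ! density scale, g/cc (placeholder)
      real(8), parameter :: lscale = 7.4d15     ! length scale, cm (example value above)
      real(8) :: rho_cgs, r_cgs, rho_cu, r_cu
      rho_cgs = 6.7d-21        ! e.g. a clump density read from problem.data
      r_cgs   = 3.1d15         ! e.g. a clump radius, in cm
      rho_cu  = rho_cgs / rscale     ! -> ~2 in computational units
      r_cu    = r_cgs / lscale       ! -> ~0.4 in computational units
      print *, 'rho (c.u.) =', rho_cu, '   r (c.u.) =', r_cu
    end program scale_demo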
Put a circle on the grid
Hi everyone. I put a circle on the grid:
I learned a bit about the global.data file that I think I will share here (unless there might be a better place to do so)…
So in global.data, Gmx controls the number of cells on the grid. This must be in agreement with Domain%mGlobal; that is, mGlobal is a domain-by-domain decomposition of the number of cells per domain, and it must run from AT LEAST 1 to the max number of cells in each dimension (i.e. mx). I ran into issues when mGlobal did not match mx: if I increased the number of cells in my domain without a concomitant increase in mGlobal, the cells were squeezed into a smaller and smaller region… Both mGlobal and mx refer to indices of the cells; they therefore cannot be real or floating-point numbers (they must be integers), and they can't have values below 1 (a negative or 0 cell index does not make sense). GxBounds, on the other hand, refers to the actual spatial coords of the grid; these can run from negative to positive numbers. To convert between cell indices and their spatial locations, I wrote this into my module:
Do k = 1-zrmbc, mz+zrmbc
   Do j = 1-rmbc, my+rmbc
      Do i = 1-rmbc, mx+rmbc
         x = xlower + (REAL(i, xPrec)-half)*dx
         y = ylower + (REAL(j, xPrec)-half)*dx
         z = zlower + (REAL(k, xPrec)-half)*dx
Visit does not produce a grid with negative values. So if you have your domain running from -10 to 10, Visit will produce a grid from 0 to 20. This is relevant to keep in mind when visualizing output.
Okay, lastly, when one writes the grid init subroutine in their problem module, it is useful to assign an alias for certain variables that are assigned elsewhere in the astrobear "net". For instance:
mx = Info%mx(1);  xlower = Info%Xbounds(1,1)
my = Info%mx(2);  ylower = Info%Xbounds(2,1)
mz = Info%mx(3);  zlower = Info%Xbounds(3,1)
Here we are declaring a local variable, mx, and giving it the value Info%mx(1). Info%mx(1), however, is populated in another module, receiving its value from global.data — Gmx. To recap: mGlobal holds discrete, cell-centered indices; xBounds holds continuous spatial coordinates.
Test
Test (redundant, I know..)