# Meeting Update 0930

- TSF: Mach 3.5 finished. I have queued up the Mach 1.5 run with Federrath accretion + parallel rotation at high resolution on Kraken. I have been writing up the simulation results section by describing the 2D column density images. The discussion section will be a bit tricky (how long will this paper be? If we use the same format as previous papers: introduction → simulation results → discussion of line plots → math model, I can see it easily going over 10 pages).

- RT: Now we can confirm that the ODE-generated initial condition in SI units is not stable. The next step is to reproduce the computational-unit test results: the conduction front problem as described in Reale et al (paper attached), and the RT (non-ablative) growth rate problem. If those still work, then we can confirm there's something wrong in the scaling. We should then use computational-unit data to generate the ODE initial condition and test it, to bypass whatever scaling the code does.

- Been polishing up the resume and statement of purpose. The nearest fellowship deadline is the 15th of this month, and I want to have everything ready by this week. Major tech companies' internship applications for next summer are open now; I'm getting prepared for those too.

# Meeting Update 09/30/2013 - Eddie

### 1D Rad Shocks

Redid the original 18 shock models for both x = 0.01 and x = 0.99. The low ionization models did not change that much, but improved a bit. The high ionization models improved a lot, but the temperature comparison is still a little off.

Here is an example of one of the models…

### 2.5D Pulsed Jets

Working on getting more references together and reading more. Fiddled around with LaTeX this morning to get a bibliography file working, so now I can easily add all my references to the paper as I gather more. I am currently going through the PPVI review paper, and some papers by Tesileanu.

Trying to get the paper figures going, but VisIt has been really slow on our local machines. I thought it was the AMR in my data, but I turned the top 3 levels off and it is still slow. This might just take some more time and patience. The density image and movie look pretty cool though: ehansen09292013.

Did some analytic work on the magnetic and pressure forces on the clumps inside the jet beam: ehansen09262013. Still trying to figure out if this numerical analysis can be made more analytic. The problem is that there does not seem to be any place where I can make an approximation, so the equations are pretty ugly. Furthermore, knowing the force is one thing, but how do I use that to derive something that can be calculated from my simulations?

### 3D Pulsed Jets

I still have a run sitting in the queue on Bluestreak. It is a fairly large job, so that is why the wait has been so long. There is not much urgency here, but hopefully it will run this week.

The scaling tests from Stampede looked good. Thanks Baowei!

# Conduction Front Analytic test

The self-similar solution for the temperature of a thermal conduction front (without hydro) is described by a set of equations giving the central temperature Tc and the front position x0 as functions of time. Note that these equations only give the temperature profile to the left of the front position x0.

I wrote a quickly thrown-together executable that returns Tc and x0 at a given time. For example,

./cond 0.001 100.0 0.1

would return "at time 0.1, Tc is 36.2443, x0 is 1.68733".
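For reference, here is a sketch of the calculation such an executable could perform, assuming the standard self-similar (Barenblatt/Zel'dovich) solution of the nonlinear conduction equation dT/dt = a d/dx(T^n dT/dx) with conserved integral Q = ∫T dx and Spitzer exponent n = 5/2. The argument meanings (a, Q, t) are an inference, but with them this sketch reproduces the example output quoted above:

```python
from math import gamma, pi, sqrt

def conduction_front(a, Q, t, n=2.5):
    """Self-similar (Barenblatt/ZKB) solution of dT/dt = a d/dx(T^n dT/dx)
    with conserved integral Q = integral of T dx.

    The profile is T(x,t) = Tc(t) * (1 - x**2/x0**2)**(1/n) for |x| < x0.
    Returns (Tc, x0): central temperature and front position at time t."""
    m = n + 1.0                        # porous-medium exponent: dT/dt = b d2(T^m)/dx2
    b = a / m
    alpha = 1.0 / (m + 1.0)            # = 1/(n+2); Tc ~ t^-alpha, x0 ~ t^alpha
    k = (m - 1.0) / (2.0 * m * (m + 1.0))
    # fix the constant C from mass conservation:
    # Q = C**(1/n + 1/2) * k**(-1/2) * sqrt(pi)*Gamma(1/n+1)/Gamma(1/n+3/2)
    I = sqrt(pi) * gamma(1.0 / n + 1.0) / gamma(1.0 / n + 1.5)
    C = (Q * sqrt(k) / I) ** (2.0 * n / (n + 2.0))
    Tc = C ** (1.0 / n) * (b * t) ** (-alpha)
    x0 = sqrt(C / k) * (b * t) ** alpha
    return Tc, x0

# conduction_front(0.001, 100.0, 0.1) -> Tc ~ 36.2443, x0 ~ 1.68733
```

The integral of the resulting profile over [-x0, x0] equals Q at all times, which is the defining property of the conduction-front solution.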

# Meeting Update 09/30/2013 -- Baowei

- Tickets
- new: #308 (MolecularCloudFormation error when increasing flow size)
- closed: none

- Resources
- Martin's allocation expired. I tried to burn the remaining SUs on Stampede with 3D Pulsed Jets.

- Worked on:
- Scaling Test and Progress Report
- Ablative RT: http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09272013
- 3D Pulsed Jets on Stampede: obtained 7 frames with 21x105x21 + 5-level AMR hydro in 2 hours, and 4 frames with 21x105x21 + 6-level AMR hydro in 4 hours
- AstroBEAR1.0 with SESAME table?

# Characterizing the Forces on the Clumps in Pulsed Jets

In this problem, I am considering the magnetic pinching force caused by the toroidal field and the gas pressure force; call their sum F, the net radial force on the clump.

The jet starts out in magnetohydrodynamic equilibrium, so the magnetic force and gas pressure force cancel (F = 0). When the jet material gets shocked at an internal working surface, both the magnetic field and the gas pressure increase, but in such a way that the previous equilibrium is no longer preserved. The clump will expand if F > 0, and compress if F < 0.

I used the MHD shock jump conditions to find the resulting F inside a clump. The jump conditions actually depend on radius in this problem. For this reason, it is extremely difficult to write an analytic expression for F. I decided to study F numerically.

Based on the above equations, you would expect to get clump compression as the magnetic field is increased. The first time I ran the numbers, I could not get F < 0. I knew this had to be wrong, so I started playing around with my initial parameters. I found that the sign of F was highly sensitive to the strength of the shock (typically characterized by the Mach number M).

In general, it seems that low M shocks are easier to compress. Once the Mach number goes below 1.0, the material is no longer shocked, and equilibrium can be reestablished (F = 0).
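To illustrate the jump-condition ingredient, here is a sketch (not the actual radius-dependent calculation used here) that assumes a perpendicular MHD shock, i.e. the toroidal field transverse to the shock normal, and computes the post-shock field and pressure for a given sonic Mach number M and plasma beta:

```python
import math

def perp_shock_jump(M, beta, gam=5.0 / 3.0):
    """Compression ratio X = rho2/rho1 = B2/B1 and pressure ratio p2/p1
    across a perpendicular MHD shock (field transverse to the flow),
    given the upstream sonic Mach number M and plasma beta."""
    # X solves the standard perpendicular-shock quadratic:
    # 2(2-g)X^2 + g[2(1+beta) + (g-1)*beta*M^2]X - g(g+1)*beta*M^2 = 0
    g = gam
    a = 2.0 * (2.0 - g)
    b = g * (2.0 * (1.0 + beta) + (g - 1.0) * beta * M * M)
    c = -g * (g + 1.0) * beta * M * M
    X = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # unique positive root
    # transverse momentum flux: p2 = p1 + rho1*u1^2*(1 - 1/X) + (B1^2/2)*(1 - X^2)
    p_ratio = 1.0 + g * M * M * (1.0 - 1.0 / X) + (1.0 - X * X) / beta
    return X, p_ratio
```

Consistent with the discussion above, X goes to 1 (no jump, F returns to 0) when M drops to the fast magnetosonic Mach number, and in the large-beta limit it reduces to the hydro jump conditions.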

Below are a few plots for the three cases of beta that I'm running in my simulations (5, 1, 0.4).

The way I like to think of these plots is that the relative velocity plots will tell you about clump evolution. The mach number plots can tell you more general things about field strengths and shock strengths. Several things that we can tell from these plots:

- The clumps are only in the compression regime at certain relative velocities. Typically, smaller relative velocities (aka weaker shocks). This means that the clumps will be more susceptible to compression at later times.

- The inner regions of the clump are more susceptible to compression first. This means that as the relative velocity decreases, the inner regions of the clump will start compressing first. We could call this "inside-out" compression.

- At some point during the compression phase, the outer regions begin to be compressed with greater force than the inner regions. This seems counter to point 2. If the entire clump enters the compression regime quickly, then the "inside-out" compression might look more like "outside-in" compression, or possibly uniform compression.

- As the magnetic field strength increases (beta decreases), the compression region shifts to the right on the relative velocity plots. However, this does not mean that stronger fields are capable of compressing stronger shocks. The mach numbers would also be shifted to the right on this relative velocity plot. Despite this fact, the mach number plots show a similar trend. So yes, stronger fields can compress stronger shocks.

- At some point, the relative velocity becomes so small that a shock is no longer formed. This is the same as the mach number going to 1.0. This means that the pressure and magnetic field do not follow shock jump conditions, and thus the net force returns to zero.

- The peak **magnitude** of the force increases as beta decreases. This confirms that stronger fields compress with greater force.

# Is time to go unstable some constant fraction of the free fall time of the ambient?

Attached PDF with calculations.

Basically, it does seem a pretty good approximation that the collapse time for the BE sphere is constant (t ~ 0.2) for the non-compression-wave cases. I checked this in the simulations using the time between sink formation and the monotonic increase of density past its original values (since in non-compression-wave cases we see breathing modes). On the other hand, it seems not as straightforward to gauge the collapse time for strong compression-wave cases. If I take it as beginning at the time I see monotonic increase of core densities above initial values, the compression-wave cases lead to short collapse times.

No clear trend in the fraction of free-fall times is observed. You can check the pdf values in the attachment.

# BE turnover updates/discussion

The time interval we chose to test stability of the BE sphere in the light case was 5 sound crossing times of the SPHERE. Perhaps if we had known better at the time, we would have chosen some timescale for the ambient. But we were interested in the stability of the sphere, so it seemed reasonable to choose that timescale. 5 sound crossing times = 1.1 for the sphere. It is different for the ambient in the different cases, given the change in ambient sound speed.

In looking for the turn-over, I found the 1/50 case did not collapse in t=1.1 (which is 5 tsc for the BE SPHERE), but the 1/45 case did.

A closer look at the timescales operating in the AMBIENT shows why.

For the ambient, tff = 1.6, slightly longer than the simulation time of t = 1.1. Perhaps the ambient did not have enough time to collapse sufficiently to trigger collapse onto the sphere. Running the simulation out longer showed the ambient collapsing far enough to trigger collapse of the sphere; it only needed a little longer, and the BE sphere collapsed fully to a sink particle by t = 1.4.

A movie (http://www.pas.rochester.edu/~erica/50LineOutLonger.gif) shows some interesting behavior of the ambient in this small time period between t=1.1 and 1.4. It is also of interest that the BE sphere collapse is triggered on a timescale similar to the timescale of the ambient's gravitational infall. Does this trend hold up in the different cases?

In checking the tsims compared to the tffs, I see that the trend holds for Jeans-stable mediums, but as the medium becomes Jeans unstable, the simulation time becomes much shorter than the free-fall time of the ambient. Maybe this is because the timescale for the Jeans-unstable medium is more accurately given by some faster timescale?

Why did 1/45 (the seeming turn-over point) collapse in the time interval t = 1.1? A look at the table, http://www.pas.rochester.edu/~erica/BEcalcs.pdf, shows that for the ambient, tff = 1.4, a little shorter than that of the 1/50 case.
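Since these comparisons hinge on the ambient free-fall time, here is a minimal sketch of the scaling involved, assuming the uniform-medium free-fall time t_ff = sqrt(3*pi/(32*G*rho)); the density value below is a placeholder, not the table's code units, and only ratios matter:

```python
import math

G = 6.674e-8  # cgs; any consistent unit system works since only ratios are used

def t_ff(rho):
    """Free-fall time of a uniform medium of density rho."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))
```

So the 1/45 ambient, denser than the 1/50 ambient by a factor 50/45, has a free-fall time shorter by sqrt(50/45), about 5%, consistent with the direction of the trend in the table (though the table's absolute values come from the simulation's own units).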

I think this table shows some interesting trends. Also, it is interesting to consider the timescale of collapse for the BE sphere once it goes unstable; is this constant across the different runs? That is, say we take the collapse time to be the time it takes the unstable BP case's sphere to collapse (~0.2); do all spheres collapse in t = 0.2 (i.e., it takes t = tfinal - 0.2 for the sphere to go unstable)? It is interesting that this timescale for the BP case is on order the sound crossing time for the sphere, as well as the average free-fall time of the sphere. Other authors have suggested a more correct dynamical timescale is given by the thermal radius over the sound speed. When I calculated this, however, it was way off from the simulation time. I am not sure why.

As for the Jeans length of the ambient (calculated at t = 0), the table shows that when the Jeans length is ~2-10x smaller than the box size, we see strong compression waves. When the Jeans length is comparable to the box size, as in the 1/10 case, we see a turn-over in the type of collapse, changing from a strong compression-wave early phase to a redistribution of matter. I am curious whether, in a case like the 1/10, making the box 2-10x larger than the Jeans length would make the collapse qualitatively different.

Lastly, we talked about increasing the size of the 1/50 box so that it is strongly Jeans unstable. On first thought, I would expect ambient collapse to happen on timescales given by tff, regardless of the size of the box. This seems justified given that the 1/50 case is "Jeans stable" but still collapses, on a timescale specified by the density of the ambient. However, given that the Jeans stability of the ambient seems tied to the generation of strong compression waves, I am inclined to think that the timescales may change from a Jeans-unstable to a Jeans-stable medium. This is at odds with the assumption that within our box we can approximate the collapse of the ambient as a uniform sphere with collapse time on order the free-fall time.

Another point we have talked about before that I am not entirely sure of: given that the matter in the box is finite and contained inside the box (i.e., we are not supplying a continuous flow of matter at the boundary), might the ambient material see the edge of the sphere as a wall and build up on it in such a way as to move toward a new HSE? In contrast, if we were continuously supplying new matter into the box, it seems this wouldn't happen, because matter would always be falling onto the BE sphere and would eventually trigger collapse.

Maybe this equilibration of the ambient would be possible for a very stable BE sphere configuration?

So in summary, there are 4 points I am making:

- How is the Jeans stability of the ambient tied to physical characteristics of the BE sphere collapse, such as the early phase type (compression wave vs. matter redistribution)?

- How is the Jeans stability of the ambient tied to the timescale of the ambient collapse?

- What timescale does the BE sphere collapse operate on?

- Can the ambient move to a new HSE with the sphere?

# Meeting Update 0923

TSF figures

Stellar Mass:

Accretion Rate:

Mixing Ratio:

TSF movies

Column density non-rotation:

http://www.pas.rochester.edu/~shuleli/tsf_paper//norot.gif

Column density parallel-rotation:

http://www.pas.rochester.edu/~shuleli/tsf_paper//parrot.gif

Column density perpendicular-rotation:

http://www.pas.rochester.edu/~shuleli/tsf_paper//perrot.gif

# Setting Orbital Parameters

I've had to do this a couple of times - and thought it might be good to post this here…

Given 2 bodies in orbit about each other, calculate the initial positions and velocities given the semi-major axis, the masses, and the eccentricity.

If we assume the orbit is about the z-axis, and that aphelion occurs when both bodies are on the x-axis, then at aphelion the velocities must be along the y-axis and the locations are on the x-axis. This leaves us with 4 unknowns: $x_1$, $x_2$, $v_1$, and $v_2$.

The center of mass residing at the origin implies that

$$m_1 x_1 + m_2 x_2 = 0$$

and we have the aphelion separation

$$x_1 - x_2 = a(1+e)$$

which can be solved for

$$x_1 = \frac{m_2}{m_1+m_2} a(1+e), \qquad x_2 = -\frac{m_1}{m_1+m_2} a(1+e)$$

We also must have zero net momentum, so

$$m_1 v_1 + m_2 v_2 = 0$$

which implies that

$$v_2 = -\frac{m_1}{m_2} v_1$$

which is consistent with them having the same angular velocity at aphelion.

Now each body sweeps out area about the center of mass at the constant rate

$$\frac{dA_1}{dt} = \frac{1}{2} x_1 v_1$$

And we know that twice the entire area of the ellipse is swept out in one orbital period $T$, so

$$x_1 v_1 T = 2\pi a_1 b_1$$

where $a_1 = \frac{m_2}{m_1+m_2} a$ and $b_1 = a_1\sqrt{1-e^2}$ are the semi-major and semi-minor axes of body 1's orbit about the center of mass. Since $x_1 = a_1(1+e)$, this gives us

$$v_1 = \frac{2\pi a_1}{T}\sqrt{\frac{1-e}{1+e}}$$

which together with Kepler's third law,

$$T = 2\pi\sqrt{\frac{a^3}{G(m_1+m_2)}}$$

define $v_1$, and the momentum condition then defines $v_2$.
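A quick numerical sketch of this recipe (the function name is mine, and G = 1 is just a unit choice for the example):

```python
import math

G = 1.0  # work in units where G = 1 for this sketch

def orbit_ic(a, m1, m2, e):
    """Positions (on x) and velocities (along y) at aphelion for two bodies
    orbiting their common center of mass, given the semi-major axis a of the
    relative orbit, the masses, and the eccentricity e."""
    M = m1 + m2
    T = 2.0 * math.pi * math.sqrt(a ** 3 / (G * M))    # Kepler's third law
    x1 = m2 * a * (1.0 + e) / M                        # center of mass at origin
    x2 = -m1 * a * (1.0 + e) / M
    a1 = m2 * a / M                                    # body 1's own semi-major axis
    v1 = (2.0 * math.pi * a1 / T) * math.sqrt((1.0 - e) / (1.0 + e))
    v2 = -(m1 / m2) * v1                               # zero net momentum
    return x1, x2, v1, v2
```

As a sanity check, the resulting relative speed at aphelion satisfies the vis-viva relation $v_{rel}^2 = \frac{G(m_1+m_2)}{a}\frac{1-e}{1+e}$.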

# Meeting Update 09/23/2013 Zhuo

Only read chapter 15 of the Riemann solver book. I apologize for that; quantum mechanics and E&M theory took a lot of my time (they have too much homework), and I am trying to balance my time. As for the 2D problem, supersonic flow past a flat plate: this problem requires solving the full NS equations with an EOS. The difficulty is that I don't have a Riemann solver for the NS equations; all my previous practice has been implementing Riemann solvers for the Euler equations.

# Meeting Update 09/23/2013 - Eddie

- made some better estimates for how long 3D runs will take ehansen09172013
- working on paper figures
- need to redo some 1D radiative shock models for Anna and Pat

# Meeting update 9/23

Beta = 0.5 runs completed; the problem with the toroidal ambient is fixed (thanks Martin!).

Made a couple of goofs over the weekend:

- Spherical wind finished running, but the wind was a factor of 10 too fast
- The density plots for Beta =.5 did not switch between runs.

Fixed both mistakes and they are currently running now. The MHD movies should be completed/live by the end of the meeting.

Comparison plots:

- Did not quite work the way I expected it to

# Meeting Update 09/23/2013 -- Baowei

- Tickets
- new: #307 (BE module bug? ) from Andrew
- closed: none

- Users:
- Wrote to Clemson?

- Resources
- INCITE program of Argonne:

1) More than five billion core-hours will be allocated for Calendar Year (CY) 2014. Average awards per project for CY 2014 are expected to be on the order of 50 million core-hours for Titan and 100 million core-hours for Mira, but could be much higher.

2) INCITE proposals are accepted between mid-April and the end of June.

2014 INCITE Call for Proposals is now closed

3) Request for Information for next year's call: https://proposals.doeleadershipcomputing.org/allocations/incite/

4) Proposal preparation instructions: https://proposals.doeleadershipcomputing.org/allocations/incite/instructions.do

- The wiki was slow occasionally last week; hopefully fixed by Rich. Still need more memory on Botwin?

- Worked on
- Ablative RT: http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09202013
- Sesame Table: finished the registration table. Need Jonathan to check it before mailing out. Shall I contact Jacques?
- Progress report for renewal: set up the sections according to the "XSEDE successful proposal guide"; haven't included much other than the usage part (http://www.pas.rochester.edu/~bliu/Proposals/Progress_Report.pdf)

# Meeting update

- Poisson solver update - previous blog post http://astrobear.pas.rochester.edu/trac/astrobear/blog/erica09192013

- Ran rho_amb = {1/60, 1/50, 1/40} rho(Rbe) simulations, checking for the point where the density of the ambient is just enough to force collapse. Found that 1/60 and 1/50 are too light to collapse, while 1/40 collapses. I am running 1/45 now; it will be done by morning.

- New intro finished for the BE paper. The methods section (apart from a description of the physical characteristics of the clump, such as being massive enough to be a site of massive star formation) is done as well. These can be read in the pdf here: http://www.pas.rochester.edu/~erica/BEsphere_EK_09.20.13.pdf. Still working on this list of paper edits:

- Discuss the turn-over, its implications, and analytical reasoning
- Change wording from isothermal general thing, to more specific case in conclusions/modify conclusions
- Add a discussion section
- Add an appendix on HYPRE and its use in astrobear
- Massive star forming cluster… add to methods the physical properties our clump has

# Athena more user-friendly than AstroBEAR?

I heard through the grapevine that Athena is more user-friendly than AstroBEAR. This has to do with the ease of getting it onto a personal machine and being able to run it (within 5 minutes), and also with the fact that published tests are incorporated into the code: tests that can be run by the user and are referenced in the literature.

Makes me wonder if we should spend some time refining/condensing our intro user material, and making it easier (and faster) for new users to get the code up and running.

# Update on Poisson solver/operator splitting

The code runs and takes the 1st time step (hydro + source step), after which it chokes during the next hydro step. I checked the initial data for scaling (see https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting; note that at this small a perturbation (delta = 1e-6), the pressure is constant because of rounding error), and the output from the Poisson solver during that first step. Here is the Poisson output:

phi -

discrete 2nd derivative of phi -

forcing function -

The convergence requirement, max(abs(u(n+1) - u(n))) < 10e-5, is reached, although the agreement between the numerical solution and the actual solution seems spotty in places (as seen when comparing the discrete 2nd derivative to the forcing function).

The code goes through the first time step and produces this behavior,

density -

velocity -

pressure -

It seems to be qualitatively correct in density and velocity. Gamma = 1.4, so I am not immediately sure if the pressure behavior is as expected.

Potential bugs are:

- gamma = 1.001 (for which my formulas for the Jeans mode are derived) diverges before it can complete the first time step, but gamma = 1.4 gets through the first step
- not sure about u in the energy source term update formula (see the wiki page on that: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/PoissonPlusHydro)
- scaling, either in the code or in scaling constants such as scalegrav (also here: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting)
- the routine where I am getting divergence may need modifications, such as increasing the number of iterations or changing how it calculates an initial guess at pstar
- the convergence of the Poisson solver is not good enough
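To illustrate the kind of check described above (a sketch only; the solver, names, and tolerances here are mine, not AstroBEAR's): solve the 1D periodic Poisson problem, stop on max|u^(n+1) - u^(n)|, then compare the discrete second derivative back against the forcing function.

```python
def solve_poisson_periodic(f, dx, tol=1e-5, max_iter=100000):
    """Gauss-Seidel solve of u'' = f on a periodic 1D grid, iterating until
    the iterate-to-iterate change max|u_new - u_old| drops below tol."""
    n = len(f)
    mean = sum(f) / n
    f = [fi - mean for fi in f]        # periodic Poisson needs zero-mean forcing
    u = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            new = 0.5 * (u[(i - 1) % n] + u[(i + 1) % n] - dx * dx * f[i])
            diff = max(diff, abs(new - u[i]))
            u[i] = new
        if diff < tol:
            break
    return u

def discrete_d2(u, dx):
    """Discrete second derivative on the same periodic grid."""
    n = len(u)
    return [(u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n]) / (dx * dx)
            for i in range(n)]
```

One caveat worth noting: a small iterate-to-iterate change does not guarantee a small residual (for smooth error modes the two can differ by orders of magnitude), which may be one reason the discrete 2nd derivative compares spottily against the forcing function even after the stopping criterion is met.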

# SUs required for 3D pulsed jet runs

I took a look at the standard out from the 2.5D simulations that were completed on Stampede. Here are estimates for the total number of cell updates required at the end of the simulation and the number of SUs for the entire run:

Beta | Cell Updates | SUs |
---|---|---|
Hydro | 181139252 | 3145 |
5 | 144062238 | 5270 |
1 | 170121573 | 7638 |
0.4 | 164839549 | 11472 |

The total number of cell updates was easily obtained from the filling fractions in the standard out. The following equations were used:

N0 = nx*ny
N1 = 4*N0*ff(0)
N2 = 4*N1*ff(1) = 16*N0*ff(0)*ff(1)
...
2.5Dupdates = N0 + 2*N1 + 4*N2 + 8*N3 + ... = N0*(1 + 8*ff(0) + 64*ff(0)*ff(1) + 512*ff(0)*ff(1)*ff(2) + ...)

Now, going from 2.5D to 3D leads to many more computational cells. There is a factor of 2 from the fact that we can no longer impose symmetry and only simulate half of the jet. So to keep the same resolution the base grid goes from 42 x 420 x 1 to 84 x 420 x 84. That is 168x more cells on the base grid. Figuring out how many cells there are at higher levels is a bit trickier.

I analyzed the 2.5D data in Visit to get an estimate for what the filling fractions would be in 3D. I used the expression

r = coord(Mesh)[0]

Then, I made a pseudocolor plot, and queried the weighted variable sum for each AMR level separately. The ratio of these sums gave me the filling fractions for each level. Then, I used these equations to get the number of cell updates:

N0 = nx*ny*nz
N1 = 8*N0*ff(0)
N2 = 8*N1*ff(1) = 64*N0*ff(0)*ff(1)
...
3Dupdates = N0 + 2*N1 + 4*N2 + 8*N3 + ... = N0*(1 + 16*ff(0) + 256*ff(0)*ff(1) + 4096*ff(0)*ff(1)*ff(2) + ...)

If I assume that the number of root steps remains constant then the ratio of (3Dupdates) / (2.5Dupdates) will tell me how much longer the 3D runs will take. However, there is an additional factor of 2 because Kraken processors are about half as fast as Stampede processors. Also, there is yet another factor of approximately 2 because 3D cell updates are more expensive than 2D cell updates. So to convert the SUs columns from table 1 to table 2 you would use:

3DSUs = 4*(3Dupdates)/(2.5Dupdates)*(2.5DSUs)
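The bookkeeping above can be sketched as follows (refinement ratio 2 with time subcycling assumed; the function names are mine):

```python
def total_updates(n0, ffs, dim):
    """Cell updates per root step for an AMR hierarchy: level l+1 has
    (2**dim) * ff(l) times the cells of level l (refinement ratio 2 in each
    dimension), and subcycling means level l takes 2**l steps per root step."""
    total = float(n0)      # level 0: n0 cells, 1 step per root step
    cells = float(n0)
    for l, ff in enumerate(ffs):
        cells *= (2 ** dim) * ff       # cells on level l+1
        total += 2 ** (l + 1) * cells  # each advanced 2**(l+1) times per root step
    return total

def su_3d(updates_3d, updates_25d, su_25d):
    """Scale 2.5D SUs to 3D: x2 for slower Kraken cores, x2 for costlier 3D updates."""
    return 4.0 * updates_3d / updates_25d * su_25d
```

This assumes the number of root steps is the same in 2.5D and 3D, as stated above, so only the ratio of update counts and the two factors of 2 enter the conversion.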

Below are my estimates for the number of cell updates, and how long the 3D runs should take on Kraken (with 10 cells per cooling length).

Beta | Cell Updates | SUs (millions) |
---|---|---|
Hydro | 1178305614616 | 81.8 |
5 | 956767405342 | 140 |
1 | 1132203286417 | 203 |
0.4 | 1361329586503 | 379 |

In total, this is about 804.1 million SUs on Kraken. Clearly, this is not going to work, so we have to lower the resolution. I redid the calculation for different resolutions. Below is a table of the total SUs for all 4 runs.

Base Grid | Levels | Cells/Lcool | SUs (millions) |
---|---|---|---|
84 x 420 x 84 | 7 | 10 | 804.1 |
84 x 420 x 84 | 6 | 5 | 140.4 |
84 x 420 x 84 | 5 | 2.5 | 24.5 |
21 x 105 x 21 | 7 | 2.5 | 24.5 |
21 x 105 x 21 | 6 | 1.25 | 4.03 |
21 x 105 x 21 | 5 | 0.625 | 0.576 |

So the only one that seems feasible is the last one. Now, one thing to keep in mind is that this Lcool is based on a cooling length of 4 AU which is what we think the strongest cooling regions within the jet should be. This corresponds to the high density and high velocity shocks (1000 cm^{-3} and 60 km/s). There are shocks that are stronger than this, especially at the head of the jet and near the injection region when pulses enter the grid. We may never be able to resolve these fully.

So even though a resolution might show 0.625 cells/Lcool, we are still fully resolving cooling lengths of 64 AU with 10 cells. This is adequate for most of the low density (100 cm^{-3}) shocks, and even some of the low velocity shocks regardless of density (< 40 km/s). So this should be fine for the clumps that have already traveled some distance. At the last frame of the simulation, this looks like it'd be good enough for at least 3 of the 5 clumps, maybe 4.

In other words, at this resolution, we do not resolve the cooling length at the beginning of a clump's "lifetime", but we do end up resolving it later.
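The resolution bookkeeping in the last two paragraphs amounts to a simple proportionality (a sketch; 4 AU is the reference cooling length from the text, and the function name is mine):

```python
def cells_per_cooling_length(cells_per_lcool_ref, lcool_au, lcool_ref_au=4.0):
    """If the strongest-cooling regions (reference cooling length 4 AU) are
    resolved with cells_per_lcool_ref cells, a longer cooling length lcool_au
    (in AU) is resolved with proportionally more cells at the same grid spacing."""
    return cells_per_lcool_ref * lcool_au / lcool_ref_au
```

So 0.625 cells per 4 AU corresponds to 10 cells per 64 AU, which is the sense in which the weaker, longer-cooling-length shocks remain resolved at the lowest feasible resolution.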

## UPDATE

As was discussed in today's meeting (9/23), I redid all the calculations for frame 65 rather than frame 100. The ratio of (3Dupdates/2.5Dupdates) is different at different frames due to filling fractions being different. Furthermore, during a simulation, the filling fractions in 2.5D won't necessarily change in the same way that they do in 3D. The idea here is that the filling fractions around 2/3 of the simulation time are most characteristic of the time-averaged filling fractions.

I only redid the last table, since it is the most useful. The numbers improved, but it appears that these runs still require many SUs.

Base Grid | Levels | Cells/Lcool | SUs (millions) |
---|---|---|---|
84 x 420 x 84 | 7 | 10 | 645.0 |
84 x 420 x 84 | 6 | 5 | 116.6 |
84 x 420 x 84 | 5 | 2.5 | 19.79 |
21 x 105 x 21 | 7 | 2.5 | 19.79 |
21 x 105 x 21 | 6 | 1.25 | 3.028 |
21 x 105 x 21 | 5 | 0.625 | 0.424 |

# Journal Club 0917 Agenda

- RT instability and the initial conditions.

http://www.pas.rochester.edu/~shuleli/0615/RT.pdf

- Implementing the SESAME table in AstroBEAR. What do we need for the NLUF simulations? What would be useful to put in the code?

- Frequent restart problem on Kraken.

Boundary Conditions (variables labeled ≥ 1 are known):

Set1:

Set2:

# Standard output for 2.5D Pulsed Jet runs on Stampede

# Equation of State functions

the ideal gas equations

$$P = (\gamma - 1)\rho e, \qquad P = \frac{\rho k_B T}{\mu m_H}$$

and the definitions of sound speed and enthalpy

$$c_s^2 = \frac{\gamma P}{\rho}, \qquad h = e + \frac{P}{\rho}$$

Here is a list of all of the various functions that are equation of state dependent in the EOS module

- P(e)
- P(h)
- e(P)
- h(P)
- T(e)
- T(p)
- e(T)
- p(T)
- c_{s}(e)
- c_{s}(P)

Which can also be obtained from

- T(e) and its inverse
- p(T) and its inverse
- c_{s}(T)
- h(T) and its inverse (though not absolutely necessary)
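As a sketch of what the ideal-gas branch of such a module might look like (a Python stand-in; the names and the choice gamma = 5/3 are mine, and e denotes specific internal energy):

```python
import math

GAMMA = 5.0 / 3.0   # assumed here; set per problem in practice

def p_from_e(rho, e):
    """P = (gamma - 1) * rho * e for an ideal gas."""
    return (GAMMA - 1.0) * rho * e

def e_from_p(rho, p):
    """Inverse of p_from_e."""
    return p / ((GAMMA - 1.0) * rho)

def cs_from_p(rho, p):
    """Adiabatic sound speed c_s = sqrt(gamma * P / rho)."""
    return math.sqrt(GAMMA * p / rho)

def h_from_p(rho, p):
    """Specific enthalpy h = e + P/rho."""
    return e_from_p(rho, p) + p / rho
```

Each remaining function in the list follows the same pattern once a mean molecular weight is fixed for the temperature relations, which is consistent with the observation above that a few invertible relations suffice to generate the rest.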

# Meeting update 9/16

- Included Temperature plots and mesh in newest runs, corrected expression for temperature.
- Found a bug in the hydro runs, so I re-ran them and am now currently processing the images again
- Creating a plot of low-res hydro simulations versus higher-res (25 zones/radius vs. 50)
- Re-running mhd runs with beta = .5

# Meeting Update 09/16/2013 Zhuo

- Summarized my method: a second-order conservative MUSCL-Hancock scheme with the Total Variation Diminishing principle and an HLLC Riemann solver. It is in my notebook.
- Reading this paper: Primitive, Conservative and Adaptive Schemes for Hyperbolic Conservation Laws

# Meeting Update 09/16/2013 - Eddie

## Pulsed Jets

### 2.5D

Runs are done and the paper is coming along nicely.

- Baowei completed the 7-level runs on Stampede. I looked at one of them last week, and it looked fine. Next step is to analyze this data and make the figures for my paper.
- The main text of the paper is more or less finished. I just need to put it in latex, and then I will send Adam a pdf later today. I will be working on the figures this week. It will also need some more references, and of course some edits.

### 3D

Conducting test runs on Kraken, using the rest of the old allocation, which will expire at the end of the month.

- I tried a hydro run first. The resolution is 4x less than the 2.5D runs, which means in the strongest cooling regions I will only have about 2.5 cells per cooling length. The simulation was only able to advance 5/100 frames in 12,096 SUs (12 hrs on 1008 cores). This means that if filling fractions remained constant, it would take about 240,000 SUs to complete the run. However, filling fractions increase as the jet propagates, so realistically this run will take closer to 1 million SUs. We may have no choice but to back off on the resolution, at least for most of the run.
- There won't be that much time between the 2.5D and the 3D papers. Should these remain isolated publications, or should I give them titles that go together like a "series"?

## 1D Rad Shocks

Met with Pat last week, and we discussed the next steps that need to be taken.

- He is working on figuring out some scaling factors in the Raymond code so that he can compare cooling rates to the ones that I gave him from AstroBEAR.

- I will be rerunning a set of 1D models and give the data to Anna for comparison.
- We discussed the paper that will come of this. Pat sees it as more of an AstroBEAR verification paper and a discussion of the cooling processes, rather than something that gives an in-depth quantitative analysis of the magnetic field and emission lines. Hence, the bulk of the writing will be done by Pat and me.

# Meeting update

Here is my page on operator splitting, with the derived update formulas I am using:

https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/PoissonPlusHydro

Here is my page on setting up the Jeans instability test for my code, including how I scaled the gravitational constant:

https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/TestingOpSplitting

The results section is not filled in, as I am still working out some kinks in the code. There seems to be a bug somewhere that is causing nans…

Along with this, I will be running some new BE sphere simulations, preparing for the MHD colliding flows simulations, and editing the paper with Phil's suggestions.

I am not sure why this code is not giving me an oscillation in the initial pressure (the W3 array):

```fortran
DO i = 1, cells
   x = REAL((REAL(i)-0.5)*dx)
   W1(i) = rho0 + delta*Cos(k*x)
   W2(i) = -(delta*growthrate*lambda)/(rho0*2*Pi)*Sin(k*x)
   W3(i) = p0 + delta*((W1(i)*rhoscale*Kb*100)/mh)/pscale  ! ideal gas law in cgs for pressure, over pscale
   WRITE(85, *) x, W1(i), W2(i), W3(i)
   ! Compute conserved quantities
   CS(1,i) = W1(i)
   CS(2,i) = W2(i)*W1(i)
   CS(3,i) = 0.5*W1(i)*W2(i)*W2(i) + W3(i)/(gamma - 1.0)
END DO
```
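One thing worth checking (a hypothesis, not a confirmed diagnosis): since W1 already contains the delta-perturbation, the oscillatory part of W3 is of order delta squared times the scale factor, riding on a constant offset of order delta. If that amplitude lands below machine epsilon relative to p0, the stored pressure is exactly flat. A minimal Python illustration of the rounding effect:

```python
import sys

# doubles have eps ~ 2.2e-16: a relative perturbation below that rounds away
p0 = 1.0
assert p0 + 1e-17 == p0        # an oscillation this small disappears entirely
assert p0 + 1e-15 != p0        # slightly larger, and it survives

# so compare the oscillation amplitude, delta**2 * (rhoscale*Kb*100/mh)/pscale
# (symbols from the loop above), against sys.float_info.epsilon * p0;
# for single-precision REALs the threshold is much larger (~1.2e-7)
print(sys.float_info.epsilon)
```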

# NPR bubble

Spherical wind blown bubble for NPR on Voyager poking through the heliopause.

movie 1: http://www.pas.rochester.edu/~martinhe/bubble1.gif

# Meeting Update 09/16/2013 -- Baowei

- Tickets
- new: none
- closed: none

- Resources
- Mercurial repository & /cloverdata/ : working with Rich to get /cloverdata/ back and get the repository a standard name
- Teragrid: two weeks left on Martin's allocation. Stampede: 8%, 37,083 SUs remaining; Kraken: 22%, 260,046 SUs remaining
- Renewal report

- Worked on
- 2.5D Pulsed Jets runs
- Multi-threaded scaling test of AstroBEAR3.0 on Bluestreak
- Read about the SESAME table

# BE paper

The paper is attached to this blog post.

# NLUF movies

Hydro

http://www.pas.rochester.edu/~shuleli/NLUF_sep/hydro.gif

Beta = 10

http://www.pas.rochester.edu/~shuleli/NLUF_sep/beta10.gif

Beta = 1

http://www.pas.rochester.edu/~shuleli/NLUF_sep/beta1.gif

Beta = 1 (different floor temperature)

http://www.pas.rochester.edu/~shuleli/NLUF_sep/difwind.gif

(new movies posted)

http://www.pas.rochester.edu/~shuleli/NLUF_sep

# Meeting update

I have been reading some material on outflows, including the PPVI chapter and Plunkett. My code is done except for some scaling that I am still trying to work out; I will have a page on the operator splitting when it is all done. I looked at the Poisson solver performance for periodic BCs with a 1D sine wave. Here is a plot of the difference between the discrete 2nd derivative and the initial density distribution, as a function of position. I am not sure why the error is low right at the ghost/physical cell boundary, but other than that the trend seems good: highest error near the edges, smaller as you move to the interior. Also, I uploaded my poster to the PPVI website.

# Meeting Update 0908

I finished four NLUF runs:

- hydro

- beta = 1 with Spitzer resistivity (temperature cut-off at 3000k)

- beta = 10 with Spitzer resistivity (temperature cut-off at 3000k)

- beta = 1 with Spitzer resistivity (temperature cut-off at 1000k)

The animations should all be up tonight.

The final TSF run is currently at frame 83 of 200. It keeps triggering restarts because of a high CFL number, but it should still finish this week.

# Meeting Update 09/09/2013 - Eddie

The issues I have been having with the pulsed jets simulations seem to have been resolved. I had set up a bunch of test runs with different resolutions to try to diagnose the problem, but none of the runs triggered restarts (PulsedJets). Something in the data files, or perhaps a slight modification I made to my code, must have fixed the problem.

The output of my Kraken runs looks fine (I'll be posting results on the wiki soon). Baowei has been running MHD versions on Stampede (bliu09092013). These also look good, so Baowei is going to let these runs finish.

Here is my plan for the rest of this week (in order of priority):

- Finish writing the 2.5D pulsed jets paper
- Make figures for the paper
- Use rest of the time on Martin's kraken allocation to test 3D pulsed jets

Also this week is the LLE meeting on Thursday. Hopefully, I will also get a chance to talk with Pat about our 1D radiative shock models.

# Meeting Update 09/09/2013 -- Baowei

- Tickets
  - new: none
  - closed: none

- Resources
  - Teragrid proposal renewal: waiting for the progress report paragraphs and publications from work using Martin's allocation: https://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu09042013
  - Martin's allocation usage: Stampede 12% | 53,227 SUs remaining; Kraken 23% | 262,082 SUs remaining

- Worked on 2.5D pulsed jets (https://astrobear.pas.rochester.edu/trac/astrobear/wiki/PulsedJets). Waiting for the green light to complete the rest of the MHD runs with 7-level AMR:
  - hydra: completed, with data located at /bamboodata/bliu/Eddie/PulsedJet_2.5D/Fiducial/hydra/
  - MHD Beta=5: 47 frames
  - MHD Beta=1: 41 frames
  - MHD Beta=0.4: 23 frames

# meeting update 09/09/2013 Zhuo

Wrote a program using the MUSCL-Hancock method with the TVD principle (MINBEE slope limiter). Details can be found here: MHMTVD
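For reference, a sketch of the MINBEE limiter and the limited slope in the form given in Toro (Ch. 13, with ω = 0); the function names are mine, and this is an illustration rather than the program above:

```python
# MINBEE slope limiter (Toro's notation, omega = 0): given the ratio r of
# upwind to downwind slopes, return the factor that scales the slope so
# the scheme stays TVD.
def minbee(r):
    if r <= 0.0:
        return 0.0                 # opposite signs: flatten to first order
    if r <= 1.0:
        return r                   # take the smaller (upwind) slope
    return min(1.0, 2.0 / (1.0 + r))  # capped by Toro's xi_R(r) for omega = 0

# Limited cell slope from the two one-sided differences (omega = 0 average).
def limited_slope(u_left, u_center, u_right):
    d_up = u_center - u_left
    d_dn = u_right - u_center
    if d_dn == 0.0:
        return 0.0
    return minbee(d_up / d_dn) * 0.5 * (d_up + d_dn)
```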

# NLUF stuff 3

Results from new runs with the correct temperature setup (0.026 eV in the ambient and 0.0012 eV in the target initially; 0.026–0.8 eV in the ambient and 0.0012–0.3 eV in the target after preheating by radiation). The Spitzer resistivity has a floor temperature: any temperature below 0.26 eV (3000 K) is treated as 0.26 eV when calculating resistivity. This means that only after preheating will the hot end of the target start to diffuse the magnetic field.
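The floor acts as a clamp on the temperature entering the ~T^{-3/2} scaling. A sketch (the normalization `eta0` and function name are illustrative, not the code's actual Spitzer coefficient):

```python
# Spitzer-like resistivity eta ~ T^(-3/2) with a floor temperature:
# temperatures below the floor are treated as the floor, so cold material
# diffuses the field no faster than material at T_floor.
def spitzer_resistivity(T_eV, eta0=1.0, T_floor_eV=0.26):
    T = max(T_eV, T_floor_eV)  # clamp: below the floor, use the floor value
    return eta0 * T ** -1.5
```

Because resistivity falls with temperature, only gas heated above the floor diffuses the field more slowly, and the cold target sits at the (maximal) floor resistivity until preheated.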

# NLUF stuff 2

Update: note that these runs are actually beta = 1000 and 100. The correct runs, with beta = 10 and 1 and a cold initial setup that matches the experiment, will be posted tomorrow.

Resolution: 64 zones per clump radius. All the MHD runs have Spitzer resistivity (~T^{-3/2}) turned on. No post-shock B field is imposed; only the initial ambient is magnetized.

Density comparison for hydro, beta = 10, and beta = 1 (note that beta = 1 is under-refined)

Beta comparison for beta = 10 and beta = 1 at the end of the runs (the scale is 100, 1000, 1e+4, etc.)

Comparison of density for the two low-res AMR runs, beta = 10 and beta = 1.

Comparison of initial and final temperature for the beta = 10 run (in eV).

# Acknowledgement for publications of work on Teragrid machines

Publication information is required for XSEDE renewals and progress reports. Papers, presentations, and other publications that feature work relying on XSEDE resources, services, or expertise should include the following acknowledgement:

**This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575.**

# Meeting update 09/03

- Ideas on protection improvements
- Radiation update
- Finished Explosion test (does not test explicit terms)
- Both explosion test and marshak wave test seem to work with AMR
- Implemented advection test (tests both implicit and explicit terms)

# Meeting Update 09/03/2013 Zhuo

- Updated mypage
- Read chapters 13 and 14 in Toro's book. They make sense to me in general, but I do not yet fully understand the details, reasoning, and motivation. In any case, I know how to implement the MUSCL-Hancock scheme and will write a program this week to solve the Euler equations with it. What interests me in these two chapters is the functional analysis used in deriving the Total Variation Diminishing (TVD) theorems, in overcoming the Gibbs phenomenon (which occurs in the presence of high gradients in the data), and in devising specific schemes. I also hope that Fourier analysis and spectral theory can be applied productively to Riemann solvers.

# Meeting Update 09/03/2013 - Eddie

### Pulsed Jets

Spent most of my time last week trying to get the 2.5D pulsed jet simulations working. Met with both Jonathan and Baowei a few times. We had decided on a setup to try on Kraken that we were sure would work; however, restarts are still being triggered, and it is still unclear why.

In previous runs, the refinement looked strange: there were small patches of high refinement scattered throughout the jet beam. After changing a few parameters and adding some error flag buffering, I was able to clear this up. The latest Kraken run is at the resolution that I want: density-gradient-based refinement up to level 3, and cooling-time-threshold refinement up to level 7. You can see from the first few frames that the mesh looks reasonable…

(Density and mesh images for frames 0–4.)
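The mixed refinement criteria described above (density gradients up to level 3, cooling time up to level 7) could be sketched roughly like this; the function, flag names, and thresholds are hypothetical, not AstroBEAR's actual interface:

```python
# Hypothetical sketch of mixed AMR refinement criteria: refine on steep
# density gradients only up to level 3, but on short cooling times up to
# level 7. (Names and thresholds are illustrative only.)
def needs_refinement(level, grad_rho_norm, t_cool, dt,
                     grad_tol=0.1, grad_max_level=3, cool_max_level=7):
    if level < grad_max_level and grad_rho_norm > grad_tol:
        return True   # resolve steep density gradients
    if level < cool_max_level and t_cool < dt:
        return True   # resolve regions cooling faster than one timestep
    return False
```

Capping the gradient criterion at a lower level keeps the deepest levels focused on the strongly cooling regions instead of blanketing the whole jet beam.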

Baowei was making better progress on bluehive and on our local machines, so I suspect the restart problem may be machine-dependent. I have a run going on Grass which seems to be progressing just fine, though it would probably take on the order of weeks to finish since it's only running on 8 cores.

I have a run on bluestreak using 256 cores which is progressing faster than the run on Grass, but is also triggering restarts. I am currently setting up a run on bluehive, so we'll see how that goes.

### Other Stuff

- Haven't heard back from Pat yet regarding the numbers I had sent him.
- I plan on finishing the paper within the next week or two. I will write it as though I have all the results and figures, which can be added later. Maybe this will just be a 2.5D paper though?
- Went to a Geneseo physics alumni event last Saturday. We were celebrating the physics department's 50th anniversary, and also the fact that Geneseo now leads the country in number of physics bachelor degrees awarded yearly. There were some other interesting things as well…
- I briefly met their new astronomy professor, Annie Pellerin.
- My work as an undergrad there is referenced in a new textbook by Koberlein and Meisel (a former professor of mine), titled *Astrophysics Through Computation: With Mathematica Support*.
- They're looking for potential colloquium speakers and asked me if I would be interested, so I might be giving a colloquium there either this semester or in the spring.

- RIT has a student named Dave Anderson who is a senior this year. He's interested in going to graduate school for astronomy, and he's considering UR. More specifically, he is interested in our research group and would like to meet all of us to see if this might be a good fit for him.

# Meeting Update 09/03/2013 -- Baowei

- Tickets
  - new: none
  - closed: none

- Resources
  - Wiki on Botwin
  - Teragrid: MAGNETIC TOWERS AND BINARY-FORMED DISKS. Stampede: 13% | 56,652 SUs remaining; Kraken: 28% | 324,717 SUs remaining

# Meeting Update 0903

- The TSF simulations are taking longer than I expected. Currently the only unfinished run is the perpendicular rotation Mach 1.5 one; it should finish within the next week. But we should start thinking about what to say in the upcoming paper (meeting this week?).

- Coded up a rudimentary explicit solver for ambipolar diffusion using the single-fluid model described here:

http://iopscience.iop.org/0067-0049/181/2/413/fulltext/

We can run simple oblique shock tests to verify it in the coming weeks. It could be helpful for some future MHD turbulence simulations. Two-fluid algorithms can be found in the recent literature, for instance:

http://www.sciencedirect.com/science/article/pii/S1384107611001321

but they require substantially more coding effort.
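In the single-fluid approximation, the ion-neutral drift enters the induction equation as an effective field-dependent diffusivity. A rough sketch (Gaussian-units form with γ the ion-neutral drag coefficient; the function name is mine and this is not the solver's actual implementation):

```python
import math

# Effective ambipolar diffusivity in the single-fluid approximation,
# eta_AD = B^2 / (4*pi * gamma_drag * rho_i * rho_n)  (Gaussian units).
# An explicit update can then treat eta_AD like a nonlinear resistivity.
def eta_ambipolar(B, rho_i, rho_n, gamma_drag):
    return B * B / (4.0 * math.pi * gamma_drag * rho_i * rho_n)
```

Because eta_AD depends on B and the densities, the stable explicit timestep varies in space and time, which is part of why the explicit solver is slow on fast-diffusion problems.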

- Since the current code takes a very long time solving fast diffusion problems with the explicit solver (especially the Spitzer diffusivity problem in the NLUF project), I have picked up the code for explicit subcycling, with a persistent data structure for communications during subcycling. The alternative is to add hypre support for implicit solves, but that requires an additional divergence cleaning algorithm implemented for both resistivity and ambipolar diffusion.
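The subcycling idea, sketched for 1D scalar diffusion (assuming uniform diffusivity and periodic boundaries; this is an illustration, not the actual AstroBEAR machinery):

```python
import numpy as np

# Explicit subcycling: the stable explicit diffusion step dt <= dx^2/(2D)
# can be far smaller than the hydro CFL step, so take many small diffusion
# steps inside each hydro step instead of shrinking the global timestep.
def diffuse_subcycled(u, D, dx, dt_hydro):
    dt_stable = 0.5 * dx * dx / D                    # explicit stability limit
    n_sub = max(1, int(np.ceil(dt_hydro / dt_stable)))
    dt = dt_hydro / n_sub
    for _ in range(n_sub):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # periodic
        u = u + dt * D * lap                         # forward-Euler update
    return u
```

The persistent communication structure matters because each subcycle needs fresh ghost zones, so the exchange pattern is reused many times per hydro step.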

- Currently going through the process of polishing my CV and writing a statement of purpose for postdoc fellowship applications. Some of the deadlines fall between the end of September and October, so there is not much time left. I will also apply for summer 2014 internships at tech companies later this semester, and possibly a job next fall, so I'm reviewing interview problems (mostly coding and algorithms).

- NLUF project: the first shots will be taken this week; I will participate in the post-shot meeting on Thursday.