# Meeting Update 07/01/2013 Zhuo

- Read Chapters 3 and 4 of Toro's Riemann solvers book.
- Read Martin's paper in depth: http://arxiv.org/abs/1211.1672v2, "The Formation and Evolution of Wind-capture Disks in Binary Systems". I built a different model for elliptical orbits (it can include thermal non-uniformity, and I hope to include orbital change in the future); I want to discuss it at Monday's seminar.

There are three reference frames in this model.

Outside the computation box:

We can get the field in the reference frame co-rotating with the primary after integration, so that we obtain a consistent flux on the boundary of the computation box.

Inside the computation box (including the boundaries): first compute the field in the reference frame co-rotating with the companion.

Special attention should be paid to the boundaries, since they may swell or shrink along the orbital motion, which introduces additional flux.

To include orbital change:

may be changing. The relative location of the companion with respect to the primary is basically in the inertial reference frame; its initial condition is exactly the initial relative position.

Therefore,

it is important to incorporate the drag force and drag torque; these give the coupled orbital equations.

Question: in low-mass-companion binary systems, what is the ordering of importance of all the phenomena below?

**thermal non-uniformity, orbital change, mass change, frictional force, rotation of the companion and even EM field**

Remark: resolution is important as a computational matter; this ordering of effects is the "resolution" in physics and modelling.

# Radiation Transport Working

The implicit radiation transport solver is now working in parallel on a fixed grid.

It takes hypre a long time to converge initially. I'm assuming this is because the opacity is so low yet the radiation field is very far out of equilibrium.

After some investigation, it looks like the matrix tends to be very ill-conditioned (condition numbers of ~1e9).

The solution is to start with a small time step and set the CFL relaxation parameter to ~1e-3. If the hydro time step (or the radiative cooling time step) is ~10 orders of magnitude longer than the diffusion time scale, and the system is not in thermal equilibrium, then the matrix becomes too ill-conditioned for hypre to solve.
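As a toy illustration of why this happens (a generic 1D backward-Euler diffusion stencil, not AstroBEAR's actual FLD/hypre system), the condition number of the implicit matrix blows up as the time step grows relative to the per-cell diffusion time:

```python
import numpy as np

def implicit_diffusion_matrix(n, r):
    """Backward-Euler matrix I + r*L for 1D diffusion, where
    r = D*dt/dx^2 and L is the standard three-point Laplacian stencil."""
    A = np.eye(n) * (1.0 + 2.0 * r)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = -r
    A[idx + 1, idx] = -r
    return A

# The condition number grows with r, i.e. with how many per-cell
# diffusion times a single implicit step spans:
for r in (1.0, 1e4, 1e8):
    print(f"r = {r:.0e}: cond = {np.linalg.cond(implicit_diffusion_matrix(200, r)):.1e}")
```

Starting with a small time step keeps r (and hence the condition number) small until the radiation field relaxes toward equilibrium.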

# Checking growth rate

Here is the theoretical growth rate (labelled mma) compared to AstroBEAR's (labelled visit) over 1 e-folding time:

They are within ~0.1% of each other by the end of the sim. Is this close enough, i.e. is this what we should expect?

Here is the same set up, over 5 e-folding times:

By the end of 5 e-folding times, however, they are within 5% of each other.

How does this data seem as a test for self-gravity in the code?

The initial perturbation is small, with amplitude 0.001*rho0.
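A back-of-the-envelope consistency check on the two numbers above: if the numerical growth rate were off from the analytic one by a constant fraction, the amplitude discrepancy would compound as exp(rate_error × n) over n e-folding times:

```python
import math

def amplitude_error(rate_error, n_efold):
    """If the numerical growth rate is off by a constant fraction
    `rate_error`, the amplitude discrepancy after n_efold e-folding
    times compounds as exp(rate_error * n_efold) - 1."""
    return math.exp(rate_error * n_efold) - 1.0

eps = math.log(1.001)          # rate error implied by 0.1% at 1 e-folding
print(amplitude_error(eps, 5)) # ~0.5%, not the observed 5%
```

A constant rate error consistent with 0.1% at 1 e-folding would predict only ~0.5% at 5 e-foldings, so the observed 5% suggests the discrepancy itself grows, e.g. as the perturbation amplitude leaves the linear regime.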

# Not that anyone should do this carelessly

So I wanted to modify the list of reports to have the useful ones at the top… And since there was an open slot (report #2), I used sqlite3 to interface with the trac database in /data/trac/astrobear/db (after making a backup, etc.). The id needs to be unique; otherwise one could break the trac database….

```
johannjc@clover db > sqlite3 trac.db
sqlite> update report set id=2 where id=27;
sqlite> .quit
johannjc@clover db >
```

# Injection Boundary for 3D Pulsed Jets

This is related to ticket #289.

It looks like the point at which things really blew up was at a restart (restarted from frame 33, explosion at frame 34), so maybe there is something about my code that makes restarts a bad idea. Regardless, the simulation had come to a halt before I tried doing the restart.

# Journal Club Meeting 0625

Agenda

Jeans instability:

https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansInstability

Ruka's jet problem, and comfort zone of AstroBEAR:

https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/smurugan/pn2013

Zhuo's BH accretion paper review:

http://arxiv.org/abs/astro-ph/0406166

# Meeting Update 6/24

- Had some problems trying to use an old module of Martin's (need to get the version of AstroBEAR he used)
- read some papers on post-AGB binary stars
- still having problems displaying my data correctly in VisIt

# Erini - Meeting Update- June 24th 2013

- having scaling errors with the polytrope profile
- I initially got it to produce a first, cut-off polytropic profile (domain size too small) that agreed with the literature for the well-documented n=3 case
- upon trying to fix the scaling errors, I am now getting NaNs

- still chugging away, but ever closer each day

# Meeting update 6/24

- Created a wiki page for the Planetary Nebulae Project

- Redoing previous runs (everything except the toroidal ambient) with the newest version of the module

- Trying a full grid run, just to see what happens

- Meeting with Martin on Thursday

# Meeting Update 0624

I've redone some of the tests mentioned last week, all from frame 0 (some of the runs in the table I posted last week are from restarts). This is what I found:

res | accrete? | second particle created?
---|---|---
80+1 | yes | no
80+2 | yes | no
80+3 | yes | no
80+4 | no | yes
80+5 | no | yes
160+1 | yes | no
160+2 | yes | no
160+3 | no | yes
160+4 | no | yes
320+0 | yes | no
320+1 | no | yes
320+2 | no | yes
320+3 | no | yes

The 80+4 and 80+5 runs that worked last week were restarts from frame 50 (the frame right before particle creation) of a previous 80 base grid run. From the new runs, it looks like our issue is entirely one of resolution: anything beyond 640 effective resolution cannot accrete after creating the first particle, and will create more particles to "compensate". The old restart runs were done on Bluestreak; the starting and finishing runs look like:

**Particle mass dependence.** I then did a few tests switching between Federrath and Krumholz accretion. The idea is to use Federrath for the first few frames after the particle is created, then switch to Krumholz to prevent the "explosion". However, the results show that at 80+5 and 160+4 resolution, even if we switch to Krumholz after the particle mass grows to 10 (10% of the initial cloud), the particle fails to accrete.

**Bluestreak performance issue.** I was out of town from last Thursday to Sunday, but I did manage to get a reservation on Bluestreak on Saturday and did some runs. I did two runs on Bluestreak featuring essentially the same setup: using frame 51 from an old run and the 80+4 setup, both run for 20 frames (to frame 71). Stampede took 6 hours on 1024 cores; Bluestreak took almost the full 24-hour reservation on 8192 cores. The full simulation is about 200 frames, with frames 50 to 150 the most intensive. Bluestreak needs nearly 10 days to finish one entire run; Stampede needs about 2 days on 1024 cores. The output chombo files are 2.5 GB each, and I've seen the old BGP handle problems of this size much, much faster. I think the best option now is to use Kraken for these runs (2048 cores, estimated 2 days per run).
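For scale, the core-hour cost of that identical 20-frame segment on the two machines, using the run times quoted above:

```python
# Core-hour cost of the same 20-frame segment on each machine,
# using the run times quoted above.
stampede_core_hours = 6 * 1024      # 6 hours on 1024 cores
bluestreak_core_hours = 24 * 8192   # ~24 hours on 8192 cores
print(bluestreak_core_hours / stampede_core_hours)  # 32.0
```

So Bluestreak burned roughly 32x the core-hours of Stampede for the same work, which is why Kraken looks like the better option.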

# Meeting Update 06/24/2013 Zhuo

- Read Chapter 2 of Toro's Riemann solvers book.
- Reviewed the previous papers.
- http://www.netlib.org/lapack/

# Meeting update

Updates from last week:

- Pretty close with the Jeans instability module. I wrote a page on the theory of the instability here (https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansInstability) and a page on the module here (https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/JeansTest). There is still a needed modification to the analytical function that I am working on. Given that the initial perturbation is sinusoidal in x and exponential in t, I think what I need my module to do is: 1. at t = 0, assign rho(x,0) = rho0 + 0.1*rho0*cos(kx) (i.e. prescribe a small-amplitude perturbation); 2. for later times t, have the density distribution continually perturbed by an exponential function of t. This I believe should take the form rho(x,t) ~ rho(x,0)*exp(omega*t), with omega = sqrt(4*pi*G*rho0 - cs^2*k^2). In this way, the amplitude of the initial perturbation should grow in time with a growth rate ~ 1/tff. The next thing I need to do in the problem module is to define and store an array of the initial rho. To store this array, I added it to the Info structure. However, I am having difficulty populating this array with Info%q(i, :, :, 1) in the problem initialization subroutine in problem.f90. It seems that this is a Fortran language error I am making, but I should ask: if one would like to store information on fluid variables from a given time level, is the Info structure a reasonable place to store it?

- Plot of xi(t) for the intermediate 1/30 case. Also, after thinking more about the critical pressure, I have convinced myself that it does change with time over the course of the sim, as the parameters of the BE sphere change.

- Read chapters 15 and 16 of Toro: 'Splitting Schemes for PDEs with Source Terms' and 'Methods for Multi-Dimensional PDEs', respectively. I think the general method for multi-dimensional schemes is a VERY straightforward extension of the 1D schemes. However, I still need to delve a little more into: 1. how the additional (shear) waves are handled numerically, which involves reading up on how to mathematically/numerically separate the different dimensions in the eigensystem; 2. how the geometric source terms can reduce the dimension of the problem while maintaining mathematical equivalence to the higher-dimensional Cartesian counterparts.
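The Jeans perturbation seeding and growth described in the first bullet can be sketched as follows (illustrative code units; G, rho0, cs, and k here are assumptions, not the module's actual values, and this is not the AstroBEAR module itself):

```python
import numpy as np

# Illustrative code units (G, rho0, cs, k are assumptions)
G, rho0, cs, L = 1.0, 1.0, 0.1, 1.0
k = 2 * np.pi / L                                     # one wavelength per box
omega = np.sqrt(4 * np.pi * G * rho0 - cs**2 * k**2)  # Jeans growth rate

x = np.linspace(0.0, L, 64, endpoint=False)
rho_init = rho0 + 0.1 * rho0 * np.cos(k * x)          # step 1: seed at t = 0

def rho_at(t):
    # step 2: grow only the perturbation; multiplying all of rho(x,0)
    # by exp(omega*t) would also (incorrectly) grow the uniform background
    return rho0 + (rho_init - rho0) * np.exp(omega * t)
```

Note the design choice: only the perturbation (rho_init - rho0) is multiplied by the exponential, since in the linear analysis the uniform background does not grow.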

Upcoming week:

- Journal Club - Jeans Instability??

- Write a 2D code (unsplit finite volume or multidimensional split scheme?)

- Perform a test with it (thinking the 2D cylindrical explosion in first section of Ch. 17). If I do this test, I need to work through the math of how the 2 sets of equations are equivalent (the 1D + source term of cylindrical and the 2D Cartesian version), and I need to make sure I can solve the 1D "numerical" test of this problem (again, this is devising a scheme that can solve for cells in r using a split scheme to handle the geometric source term) to compare with the 2D results.

- Finish the Jeans instability code and make figures for appendix

- Pcrit/P(r) plots? For which cases?

- Poster?

# Meeting Update 06/24/2013 -- Baowei

- Tickets

- Teragrid Proposal

- Worked on

- OoO

- will be out of office most of the time this week.

# Meeting Update 06/24/2013 - Eddie

- made movies of the 3D pulsed jet runs

beta | image | movie(s) |
---|---|---|
hydro | movie | |
5 | movie | |
1 | full movie; expansion (slowmo) | |
0.4 | movie | |

- I tried increasing the size of the domain for the beta = 1 run, but the expansion started happening sooner…

- I also altered the refinement at the injection region in two ways: I used the cylindrical shape option instead of a rectangular prism, and I increased the shape size by 50% in all directions. The shape of the mesh didn't change (it's still rectangular), so I don't know if the cylindrical option did anything differently. These runs haven't finished yet, but I can already tell that the beta = 0.4 run seemingly worked better. The image below looks like an improvement, but the simulation on Kraken has come to a halt just like the old one did.

- Made image/movie of emission maps for the 2.5D runs

- working on results section of paper and adding more references

# Teragrid Proposal of July 1

- Allocations:
- Requested: 7,100,000 SUs
- Awarded: 3,422,821 SUs

- Referee reviews:
- This is a new request for 7 million SUs on Kraken to study astrophysical flows by a large team of researchers (5 PIs including several early-career scientists) and supported by a large number of awards (5, including 1 NSF award). They made use of a start-up grant to analyze the performance of their code, AstroBEAR, which is adaptive mesh, and the proposing team is the same as the development team of this code. Provided is the strong scaling for the resolution they plan to run (128 + 4 levels AMR) on the target resource (Kraken), and it demonstrated good scalability. They make the point that the AMR code is 100x faster than the equivalent fixed-grid computation, so their strategy of using AMR is very helpful for this research. Overall a good proposal. There were a few shortcomings. I would have liked to have also seen weak scaling for that resolution, or a smaller size, as well as some justification for why they chose the resolution they did. They do not provide information about the experience of their team, but it seems they have expertise covering HPC aspects and consider code optimization. They also didn't mention whether they have local computing resources. They don't describe what beta is, how the various angles lead to different results, why they chose the angles they did, or how that will lead to a successful investigation. They mention they will save 150 frames of each run, but it is not clear whether that is only a subsample of the total frames that will be run. They do not give units for the runtime in Fig 3, and don't give the walltime for how long a frame takes; they simply say it takes 6000 SUs. On balance, I would not recommend the full allocation, but they have made a case for an award of about 50% of their request.
- This is a good proposal with all of the relevant information present. However, I could not find any previous usage of the group, except for some roaming allocation. The code is appropriate for the proposed computations and the scaling is fine. Because of the short track record I'm hesitant to recommend full funding. I recommend to grant half of the request. Kraken: 3.5 MSU Storage: 2500

- Important Factors
- Funding: NSF supported Computational Research Plans have priority over all non-NSF supported components.
- In the (usual) case where both non-NSF and NSF funding are involved, the Recommended Allocation is split into NSF and non-NSF portions.
- The non-NSF portion of a Recommended Allocation is reduced by the global fraction times an additional factor (greater than 1).

# Journal Club Meeting 0618

**Agenda**:

In this week's journal club, we will cover the following topics:

Ruka's simulations:

https://astrobear.pas.rochester.edu/trac/astrobear/blog/smurugan06172013

Eddie's Ticket on 3D pulsed jet:

https://astrobear.pas.rochester.edu/trac/astrobear/ticket/289

False particle creation in triggered star formation (ticket 288):

https://astrobear.pas.rochester.edu/trac/astrobear/ticket/288

https://astrobear.pas.rochester.edu/trac/astrobear/blog/shuleli06172013

Zhuo's paper reading on Bondi-Hoyle-Lyttleton accretion:

http://arxiv.org/abs/astro-ph/0406166

# mathjax working again

I successfully installed a version of MathJax that supports the inline features.

You can use

\(a=b\)

for inline equations, i.e. \(a=b\)

or

$$a=b$$

for block equations

note that the old latex macro still renders images on the server

[[latex($a=b$)]]

gives

so the only problem will occur if people have used

[[latex($a=b$)]]

(or removed their dollar signs when we switched from tracmath to tracmathjax)

# Meeting Update June-17th Erini

- Got the code to compile

- It's freezing and doesn't run - trying to figure out where it's crashing!

- So here is what I'm trying to change in terms of initializing:
  - putting the profile object on the ambient object
  - having the ambient density and pressure be the boundary density and pressure for the polytrope... ideas?

# Meeting Updates 6/17/13

- Bluehive's been incredibly slow the last week, seems to have gotten better this afternoon
- Still working on getting the toroidal ambient to work with jets
- Editing module so that the diverging wind has an arc for opening angle instead of being flat
- Experimenting with different boundary conditions, trying full grid runs

Hopefully more pretty pictures next week!

# Meeting Update Jun 17

Have done a set of test runs of TSF with Krumholz accretion and patched refinement:

res | initial mass | accrete? | massive particle created?
---|---|---|---
80+1 | 15 | yes | no
80+2 | 3.58 | yes | no
80+3 | 1.02 | yes | no
80+4 | 0.65 | yes | no
80+5 | 0.47 | yes | no
320+0 | 40 | yes | no
320+1 | 0.27 | no | yes
320+2 | 6.52e-3 | no | yes
320+3 | 4.35e-4 | no | yes

As shown above, the creation of an unphysical massive particle (usually with mass ~1e+4) seems to be correlated with whether the first created particle is able to accrete. Strangely, at resolutions of 320+1 and above, the particle accretes significantly more slowly (usually by a factor of ~1e4).

The 80+4 run provides 256 zones per cloud radius; while not ideal, it's decently fast for a high-res simulation. We can go with 80+4, or do some more testing this week on a 160 base grid + 3 levels of refinement, for the conference poster. For a paper, I think we should still shoot for 512 zones per cloud radius, because that seems to be what most recent papers on shock-cloud interaction have used. Either do 80+5 or 160+4 (slow), or figure out what's wrong with 320+3.
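The "base+levels" bookkeeping above is just base resolution times 2 per AMR level; the 0.2 factor (cloud radius as a fraction of the domain) is inferred from the quoted 256 zones per cloud radius at 80+4, so treat it as an assumption:

```python
# Each AMR level doubles the effective resolution; the 0.2 cloud-radius
# fraction is inferred from the quoted 256 zones at 80+4 (an assumption).
def effective_res(base, levels):
    return base * 2**levels

for base, levels in [(80, 4), (80, 5), (160, 4)]:
    eff = effective_res(base, levels)
    print(f"{base}+{levels}: {eff} effective, {int(eff * 0.2)} zones per cloud radius")
```

This also shows why 80+5 and 160+4 are interchangeable targets for the 512-zone goal: both give 2560 effective resolution.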

We need at least three runs: Mach = 1.5, Mach = 3.5, and Mach = 1.5 rotating; these should finish in about a week.

I will be out of town for a short break from this Thursday till Sunday.

# Meeting Update 06/17/2013 Zhuo

- http://adsabs.harvard.edu/abs/1944MNRAS.104..273B Bondi and Hoyle's paper on the mechanism of accretion by stars.
- http://adsabs.harvard.edu/doi/10.1093/mnras/stt725 Formation and evolution of wind-capture disks in binary systems, by our group.
- http://arxiv.org/abs/astro-ph/0406166 A review of Bondi-Hoyle-Lyttleton accretion.
- Read the first chapter of Toro's book.
- Ran some co-rotating binary simulations (can find rho of the ambient, can't find rho of the companion).

# Meeting Update 06/17/2013 - Eddie

- Worked on paper: practically done typesetting everything up to the results section, for which I am just starting the text now.
- 2.5D runs ran fine on Kraken, so those are done.

- 3D runs: hydro and beta = 5 runs finished, but the beta = 1 and beta = 0.4 runs have not been able to finish. I posted a ticket for this: #289. Kraken is down at the moment, but I will attach the standard out and other files to this ticket once Kraken is back up.
- Finished a colliding jets run at a higher resolution. I had emailed Sergei a few weeks ago to let him know that I started on this project, but I never received a response.

- Back to the 1D radiative shock models: I don't remember hearing back from Pat after I finished the runs and posted all those images (ehansen04222013). Is there anything I need to do for this project right now, or am I just waiting for something to happen on Pat's end?

# Meeting update - 6/17/13

- Calculated accretion rates for runs (non-HS; they seem high)
- figured out a second way to calculate accretion rates
- ran sims for HS equilibrium (rates still seem high)
- problem running HS sims with too low an ambient density ("requesting restart due to nan in flux"; says it would take 2.8 years)

- need to work on wind capture disk simulations

# Meeting update - 6/17/13

- Alfalfa needs a new video card pretty badly. Rich says he can order one, but needs the account number.

- Worked on revisions for the main body of the paper. The updated pdf can be found here: http://www.pas.rochester.edu/~erica/BEsphere.pdf I ran into some trouble describing the last intermediate run (the lightest ambient medium). At first we talked about the sphere slowly acquiring more mass than the critical level and going over the edge. Then we talked about the ambient equilibrating to a decaying exponential, and the resulting surface pressure exceeding the threshold Pext. After looking at the mass of the sphere over time, I would say that this latter point is the case. Not only is the sphere initialized with subcritical mass, but it never again reaches its initial value. This is slightly concerning to me, given that collapse ensues.

Here is an image of the calculated values from visit-

I got these values of the mass of the sphere by doing the following: 1) looking at lineouts of density and determining the position of the discontinuity (xdisc), which we are calling the radius of the sphere; 2) calculating the total mass within the sphere of r = xdisc with a query function in VisIt.

I noticed that the only sphere that is actually initialized to be at the analytical critical mass is the Matched case. The others have decreasing Minit with decreasing ambient densities. This seems to be related to smoothing of the BE-ambient boundary.
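The two-step measurement described above can be sketched on a toy lineout (a uniform unit sphere in a thin ambient; the actual numbers come from VisIt, not this script):

```python
import numpy as np

# Toy version of the two-step measurement: find the density
# discontinuity on a lineout, then integrate the enclosed mass
# assuming spherical symmetry.
def sphere_mass(r, rho):
    xdisc = r[np.argmin(np.diff(rho))]        # steepest drop = sphere edge
    inside = r <= xdisc
    f = 4.0 * np.pi * r[inside]**2 * rho[inside]
    # trapezoidal integral of 4*pi*r^2*rho out to xdisc
    mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r[inside]))
    return xdisc, mass

# uniform unit-density sphere of radius 1: expect M ~ 4*pi/3
r = np.linspace(0.01, 2.0, 400)
rho = np.where(r <= 1.0, 1.0, 0.01)
print(sphere_mass(r, rho))
```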

- Wrote a MUSCL-Hancock scheme for the 1D Euler equations. The code runs correctly, and I will be posting my notes and results to this page later today: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/MusclHancock

- We talked about going on to chapters 15-16 of Toro. Chapter 15 covers source-term implementation, and chapter 16 goes through multidimensional schemes. Should I focus next on source terms and self-gravity, and then follow up with multidimensional schemes, or the other way around?

# Meeting Update 06/17/2013 -- Baowei

- Tickets
- new: none
- closed: none

- Users
- send the code to Ian

- Things to buy (as discussed in the past couple of weeks)
- Hard drive dock
- Poster tubes
- Video card for Alfalfa

- Meetings
- SC13 Nov 16 - Nov 22, Denver CO. Registration opens July 17th. Technical Program poster. Poster submissions due July 31, 2013.
- ASP Annual Meeting 2014

- Worked on
- optimize with OpenMP (#285)
- local users

# Journal Club Meeting 0611

Agenda

The Responses of Magnetically Sub-Critical Cores to Shocks http://arxiv.org/abs/1305.1716

Cross sections for tidal capture binary formation and stellar merger http://articles.adsabs.harvard.edu/full/1986ApJ...310..176L

# Meeting Update 6/10/13

Got almost all of the low-res runs working (no MHD yet):

Stratified Jet

Diverging Wind (10 Degrees)

Diverging Wind (20 Degrees)

Stratified Diverging Wind (10 Degrees)

Stratified Diverging Wind (20 Degrees)

Toroidal Clump

For next week, we need to get the toroidal ambient working with the jets (currently throwing NaNs) and incorporate MHD into the diverging-winds case.

# Meeting Update June 10th - Erini

Almost done with polytrope stuff

I reformatted everything so that the input parameters are central density, mass, and polytropic index, as opposed to central density, stellar radius, and polytropic index… but I have now run into an issue.

Trying to figure out a way to approximate the cell length. I was previously defining it as cell length = R_outer/npoints.

The code finds R_outer AFTER the integration. Any ideas on how to find a good bound for the cell length without the radius being an input?
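One possible answer, sketched under the assumption that tabulated Lane-Emden constants are acceptable: for a polytrope, M = 4π a³ ρ_c ξ₁²|θ'(ξ₁)| with R = ξ₁ a, so given mass and central density the radius follows analytically before any integration. `polytrope_radius` is a hypothetical helper name, not part of the existing module:

```python
import math

# n = 3 Lane-Emden constants: xi1 and xi1^2 * |theta'(xi1)|
XI1, M_XI = 6.8968, 2.0182

def polytrope_radius(mass, rho_c, xi1=XI1, m_xi=M_XI):
    # invert M = 4*pi*a^3*rho_c*xi1^2*|theta'(xi1)| for the length scale a
    a = (mass / (4.0 * math.pi * rho_c * m_xi)) ** (1.0 / 3.0)
    return xi1 * a

# sanity check in code units: rho_c/rho_mean = 54.18 for n = 3, so
# M = 1, rho_c = 54.18 gives rho_mean = 1, i.e. R = (3/(4*pi))^(1/3)
print(polytrope_radius(1.0, 54.18))  # ~0.620
```

Then cell length = R_outer/npoints can be set up front, and the R_outer found after the integration serves as a consistency check.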

# Meeting Update Jun 10

**TSF** We are close to using up our CPU hours on Stampede. Martin suggested we move the runs elsewhere. The current plan is to debug the particle refinement bug on Bluestreak.

**Papers** Finished writing the MUIV proceedings.

**NLUF** Finished implementing the Spitzer resistivity. We should be good to start running the experiment settings. One problem is finding a good place for the runs.

**Travel** May need to go to NYC again to submit some supporting material.

# Meeting Update 6/10/2013

- read review on planetary nebulae
- read paper on wind-capture discs in binary systems
- read over fortran basics
- ran simulations with varying ambient densities (1e-10 to 1e0)
- need to calculate accretion rate of simulations (need to find out how to get mass of point particle at the center of the disk)

# Meeting Update 06/10/2013 -- Baowei

- Tickets

- Users
- wrote to Uppsala University user, no response received yet.

- Promotional video
- uploaded to youtube channel: http://www.youtube.com/watch?v=epKYY1POl0s

- Worked on
- testing the OpenMP optimized code on bluehive
- local users
- reading planet atmosphere papers

# Study in the past week

- Read http://adsabs.harvard.edu/abs/2006MNRAS.370.2004N, "Common envelopes and their relevance for planetary nebulae", written by Jason.
- Successfully ran CorotatingBinary.
- Tried to write down the governing equations of a specific binary system; I am still working on that. The equations include: continuity, momentum, energy, equation of state, Maxwell's equations, and gravity (Poisson).
- Kept practising F90/95 in Linux.

I have not forgotten to learn the theory of AMR; it's on my to-do list.

# timinginfo

Added timers to every do loop in the hyperbolic update.

Here is the breakdown in terms of percent time and threaded speedup (using 32 threads):

id | serial time | speedup | 32 thread time | percent of total | description
---|---|---|---|---|---
82 | 3.231047 | 9.128274 | 0.353960343 | 9.89% | predictor riemann solves
80 | 3.121115 | 8.762267 | 0.356199486 | 9.55% | predictor riemann solves
84 | 2.997481 | 9.587972 | 0.312629303 | 9.17% | predictor riemann solves
69 | 2.919109 | 22.452476 | 0.130012788 | 8.93% | eigen system
213 | 2.752028 | 9.994905 | 0.275343087 | 8.42% | final riemann solves
210 | 2.63407 | 9.832083 | 0.26790559 | 8.06% | final riemann solves
216 | 2.617439 | 9.956359 | 0.262891183 | 8.01% | final riemann solves
23 | 1.504078 | 23.787341 | 0.063230186 | 4.60% | characteristic tracing
25 | 1.499858 | 23.942636 | 0.062643812 | 4.59% | characteristic tracing
20 | 1.495662 | 24.030097 | 0.062241197 | 4.58% | characteristic tracing
19 | 1.38434 | 23.166956 | 0.059754937 | 4.24% | 2 spatial reconstruction
18 | 0.689309 | 22.116419 | 0.031167297 | 2.11% | spatial reconstruction
2 | 0.128533 | 19.941572 | 0.00644548 | 0.39% | cons_to_prim
97 | 0.110934 | 20.652223 | 0.005371528 | 0.34% | upwinded emfs predictor
96 | 0.099056 | 19.557486 | 0.005064864 | 0.30% | upwinded emfs predictor
95 | 0.098898 | 19.709309 | 0.005017832 | 0.30% | upwinded emfs predictor
225 | 0.091149 | 19.994068 | 0.004558802 | 0.28% | upwinded emfs
159 | 0.090894 | 20.499695 | 0.00443392 | 0.28% | prim_to_cons2
147 | 0.090303 | 20.1791 | 0.004475076 | 0.28% | prim_to_cons2

And here is a nifty plot showing the time spent in each loop vs. threading efficiency.

The lower right cluster is the 6 different riemann solve loops (3 dimensions each for predictor and final), which show the most promise for improving the speedup. The upper right is the calculation of the eigensystem, which seems to scale quite well. In the center are the spatial reconstruction and characteristic tracing.

The weighted average efficiency is around 12%, mostly due to the riemann solves.
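The exact weighting behind that figure isn't shown; one common convention (total serial time over total 32-thread time for the tabulated loops) gives an overall weighted speedup near 12, so the "12" may be a speedup rather than a percentage (~37% efficiency on 32 threads):

```python
# (serial time, speedup) pairs from the timing table above
loops = [
    (3.231047, 9.128274), (3.121115, 8.762267), (2.997481, 9.587972),
    (2.919109, 22.452476), (2.752028, 9.994905), (2.634070, 9.832083),
    (2.617439, 9.956359), (1.504078, 23.787341), (1.499858, 23.942636),
    (1.495662, 24.030097), (1.384340, 23.166956), (0.689309, 22.116419),
]
nthreads = 32
serial = sum(t for t, _ in loops)
threaded = sum(t / s for t, s in loops)  # per-loop time with 32 threads
print(serial / threaded, serial / threaded / nthreads)
```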

So the problem was the allocation of temporary variables inside the riemann solvers. Different threads would allocate these arrays at the same time, and they would end up close together in memory, which presumably led to false sharing. In any event, the scaling of the riemann solvers is now much better, and they are a small portion of the runtime. The threaded bits now get an average speedup of 21; however, the non-threaded parts now make up 90% of the time…

Here is an updated graph showing the scaling performance of the same do loops.

I also looked at reducing the time spent in calc-eigens. I suspected that the manipulation of a MaxVars-by-MaxVars static array might be responsible, so I reduced MaxVars from 20 to 10. The result was a significant speedup.

I also added timers for the source routines (even though there were no source terms), and they seemed responsible for 90% of the time in the threaded runs. If I comment those out, I get a 16-fold speedup with threads on a Blue Gene node for a fixed-grid computation.

I next tried an AMR run, but that seems to run slower due to extra refinement. See the attached screenshot.

Obviously the threading is producing noise which is triggering refinement.

So it turns out I was attempting to use allocated variables with local module scope as private within a threaded do loop… but apparently this does not work with OpenMP. I changed them to a static size and declared them with subroutine scope, and it seems to work fine now. On a 16^{2}+2 field loop advection run, the single-node threaded version was 20% faster than the single-node MPI version.

# Meeting Update Jun 3

**Conference** Did the visa interview in NYC last weekend. We need to order some poster tubes if we don't have any yet ($10 on Amazon).

**Star Formation** Jonathan did a quick fix for the patched particle refinement. It looks better now for the uniform collapse problem: only 12 particles are created and no more are added (similar behavior to the Federrath accretion). One remaining problem is that they don't seem to be symmetric, as they are in the Federrath case.

movie

Using this fix, I was able to rerun the TSF problem with Krumholz accretion. Currently at frame 54:

One problem I found is that similar to the uniform collapse, it creates two particles both of which are off-centered (only slightly). The position of particles are (x=1.3, y=1.5, z=1.5 being the center):

I also noticed the run being much slower than the previous Federrath run with the same resolution setting.

**NLUF** Completed the implementation of Spitzer resistivity in the explicit solver. One thing I'm worried about: the time stepping will be dominated by the highest resistivity in the grid, i.e. inside the clump, where the temperature is 100 times colder, resulting in a resistivity 1000 times higher. Therefore even if the environment has a high Reynolds number, the overall time step will likely be very small.
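The scaling worry in numbers (illustrative code units, not the solver's actual values): Spitzer resistivity goes as T^(-3/2), and an explicit resistive update needs dt ≲ dx²/(2η), so the coldest cell sets the global step:

```python
# Spitzer resistivity eta ~ T^(-3/2); explicit resistive stability
# requires dt <~ dx^2 / (2*eta), so the coldest cell dominates.
def spitzer_eta(T, eta0=1.0):
    return eta0 * T ** -1.5

def explicit_resistive_dt(dx, T):
    return dx**2 / (2.0 * spitzer_eta(T))

dx = 0.01
ratio = explicit_resistive_dt(dx, 1.0) / explicit_resistive_dt(dx, 0.01)
print(ratio)  # a 100x colder clump forces a ~1000x smaller time step
```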

# Meeting Update 06/03/2013 - Eddie

- Dealing with a variety of problems with my pulsed jet runs: ehansen05312013. The errors I have gotten on Bluestreak don't make sense to me; they reference a line in the code that looks perfectly fine.

- Have most of the text written for abstract, introduction, and methods sections of my paper. Just need to typeset it with equations and references.

# Study in the past week

- Learning Fortran 90/95 in Linux environment.
- Reading the article "The Evolution of Binary Systems" (http://www-astro.physics.ox.ac.uk/~podsi/binaries.pdf); I read some parts of it and gained knowledge of binary stars.
- Reading "Common Envelopes in Binary Systems" (http://adsabs.harvard.edu/abs/1993PASP..105.1373I); I read a few pages.
- Reviewed the R-T and K-H instabilities (still in progress), because I think the K-H instability is important in binary evolution.
- Learning AstroBEAR; tried to run an example.

# Meeting Update 06/03/2013 -- Baowei

- Tickets
- new: none
- closed: none

- Users:
- New: Dan (REU student); a user from the University of Waterloo (for research); a user from the University of Birmingham, UK ("To investigate using AstroBEAR for simulations of colliding winds in binary systems and galaxy superwinds. I've used VH-1 a lot over the years (http://adsabs.harvard.edu/abs/1992ApJ...386..265S, http://adsabs.harvard.edu/abs/2000MNRAS.314..511S) and want to investigate AMR/MHD some more.")

- worked on

- wiki latex plugin update: colorful equations: https://astrobear.pas.rochester.edu/trac/astrobear/wiki/FluxLimitedDiffusion