Posts for the month of October 2014

Fly through test 2

Yesterday I got a new copy of the code and checked out development. I then made a new problem directory in my code to run Jonathan's problem module and .data files from BH2. The code eventually built after fixing the HYPRE path and CPP errors. It seems we didn't see the clump in the first test because I ran a 2D simulation, so this time my global.data has nDim = 3 and Gmx = 32, 16, 16. I ran the executable twice, for 0 and 2 levels of AMR (see results below). I ran them both out for 100 frames, so the camera moves from one position to the next at a slower rate than in Jonathan's simulation, which only has a few frames.

Currently I am trying to understand how the camera works and run some simulations with a different camera position, focus and up vector. I'll post those results once they are done.

0 Levels of AMR

Chombo movie

bov movie

2 Levels of AMR

2 levels of amr chombo frame 0

Chombo movie

fly through projection

bov movie

Fly through test 1

Currently trying to replicate Jonathan's simulation johannjc09292014 so that I can apply his method of setting up the cameras to our CollidingFlows problem module. In his problem module he sets up the cameras, makes an ambient to fill the grid, puts a clump in the ambient, and then puts a wind in the box.

Last week I built the code incorrectly due to my own lack of understanding. This time I created a new problem directory and dropped in his .data files and problem.f90 to make a new executable. I did this using scrambler_3.0, which I copied from Erica's directories. I ran into an issue when I tried to make the code:

modules/Problem/problem.o: In function `problem_mp_problemmoduleinit_':
modules/Problem/problem.f90:(.text+0x3ef): undefined reference to `updateprojection_'
make: *** [astrobear] Error 1

So I went into Jonathan's problem.f90 and commented out

CALL UpdateProjection(Projection)

and it built the code without any errors. The results from the simulation are below. We still see rotation in the frame; however, it still does not look like Jonathan's simulation. It rotates into the wall and then zooms up on it. I only included ¾ of the simulation since I did this on my personal laptop and it was taking too much time in VisIt while SSH-ing into Bamboo.

So now I have a new astrobear and am trying the same process again (this time hopefully without needing to comment out the call to update projections).

movie

Here is what the chombo files look like:

movie

Essentially there is just a wind coming in.

Adjusting profiles for planetary sims

Modified temperature profile to avoid enormous density contrasts

Density and Pressure Protections

In physics.data, there are options for protecting density and pressure. These control the Protect routine in EOS.f90, which is called at various points throughout the simulation to make sure the density does not go below MinDensity and the temperature does not go below MinTemp.

There is also a routine called PrimProtect in EOS.f90. This routine is also called to protect density and pressure. The difference is that it is called on the q-array when the code is calculating predictor fluxes and final fluxes. Due to the way the code is written, PrimProtect cannot make use of the different protection options in physics.data.

Below, I have described how each routine handles density and pressure protections.


Protect

If the density < MinDensity, then a variety of things can happen depending on the options set in physics.data. Because momentum and density are related, they are both protected.

Momentum

  1. momentum is conserved and nothing happens
  2. momentum is set to zero
  3. momentum is set using average nearby velocity (default)

Density

  1. density is set to MinDensity
  2. density is set to the minimum nearby density
  3. density is set to average nearby density (default)

Then, no matter what options are chosen, the code sets the energy to zero for some reason.

Pressure

This protection occurs when the temperature < MinTemp.

  1. temperature set to MinTemp
  2. pressure set to minimum nearby pressure
  3. pressure set to average nearby pressure
  4. temperature set to minimum nearby temperature
  5. temperature set to average nearby temperature (default)
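The options above can be sketched as follows. The function, variable, and option names here are illustrative, not the actual EOS.f90 interface:

```python
import numpy as np

MIN_DENSITY = 1e-10   # analogue of MinDensity in physics.data (hypothetical value)
MIN_TEMP = 1e-4       # analogue of MinTemp (hypothetical value)

def protect(rho, mom, temp, density_opt=3, momentum_opt=3):
    """Sketch of the Protect logic: index 0 is the protected cell, 1: are neighbors."""
    if rho[0] < MIN_DENSITY:
        # Momentum options: 1 = leave momentum alone, 2 = zero it,
        # 3 = reset it from the average nearby velocity (default)
        if momentum_opt == 2:
            mom[0] = 0.0
        elif momentum_opt == 3:
            # record the average nearby velocity before touching the density
            avg_velocity = np.mean(mom[1:] / rho[1:])
        # Density options: 1 = floor at MinDensity, 2 = minimum nearby,
        # 3 = average nearby (default)
        if density_opt == 1:
            rho[0] = MIN_DENSITY
        elif density_opt == 2:
            rho[0] = rho[1:].min()
        else:
            rho[0] = rho[1:].mean()
        if momentum_opt == 3:
            # new momentum consistent with the density we just assigned
            mom[0] = rho[0] * avg_velocity
    if temp[0] < MIN_TEMP:
        temp[0] = MIN_TEMP  # option 1; the nearby-min/avg variants are omitted
    return rho, mom, temp
```

The PrimProtect path corresponds to the no-neighbor limit of this sketch: floor the density at MinDensity, zero the momentum, and floor the temperature at MinTemp.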


PrimProtect

Since data about neighboring cells is not available to the routines that call PrimProtect, the protection is more limited. As before, the momentum and density protection occurs when density < MinDensity and the pressure protection occurs when temperature < MinTemp.

Momentum

  1. momentum set to zero

Density

  1. density set to MinDensity

Then, the temperature is set to MinTemp.

Pressure

  1. temperature set to MinTemp

Figures Paper - updated

Dynamics & Morphology

Col density X
Energy Spectra - B, KE, gravitational X
B v. n PDFs X
Beta maps X

Identifying Filaments & Characterizing their Dynamics

3D plot of filaments (at end of sim, across different runs), with and without streamlines
Pick out 3 or 4 best filaments then plot along filament angle between mean field and filament direction
Radial characteristics
Morphology identification, hub n spokes? Filaments and fibers?
Flow along vs. onto filaments

Put in terms of Observer

Velocity spectra and/or velocity dispersion X
Density spectra X
Polarization maps
Characterizing projection effects?

Amira Code

Identify filaments by hand and their characteristics
Code that into Amira
Make easy filament set and test with Amira
Then harder test case
Use to count different filaments in time

pictures

Meeting Update 10/27/2014

Visualized my fly-through data. It isn't right; it seems like it just follows the corner of a box. I didn't implement the tracers last time since it was just a test of the code, so I figured it wouldn't look like our problem. That is the next step.

Meeting Update 10/27/2014 - Eddie

Last Week

  • Finished (for now) the simulations of colliding Al plasma flows: ehansen10212014
  • Did some nice little simulations of the NTSI for journal club: ehansen10232014
  • Implemented protection tracking for silent protections. These are the protections that occur for the calculations of the predictor fluxes and final fluxes. Have some questions on why these protections do what they do.
  • Got my MHD models of the 2.5-D pulsed jets to run. The beta = 5 and beta = 1 runs completely finished, but beta = 0.4 only made it to 96/100 frames and is stuck.


This Week

  • For the beta = 0.4 model, which is stuck at frame 96…
    • It produces a new error that we haven't seen before in these simulations: "Src routine produced a NAN". The NAN is in the species tracers.
    • It tries to reduce the time step and restart, but eventually shows "Min Timestep Reached, Stopping, …", and the code exits. I haven't seen this before either. Is this a relatively new feature for the purpose of stopping a run that is essentially stuck so that it doesn't burn SUs?
    • I'm trying to get my final 4 frames, but I'm not sure if it will work out. It's not the end of the world if I can't get this to run. Worst case scenario: I just use frame 96 from all of my runs for comparison and for the figures for the paper.
  • Once I have the beta = 0.4 issue resolved, I will have everything I need to complete the paper revisions.
  • Working on implementing tracking of max speeds. Not just max speed of the velocities on the grid (sound speed and vx, vy, vz), but max speed of all velocities used internally within the code (e.g. the Riemann solvers).
  • Next thing to do is the Mach stem paper revisions.

Meeting Update 10/27/2014

  • Added the hydrodynamic escape parameter to the code.

  • Tested it for several cases; it agrees with the values used by S&P (lambda = 14.82 for the solar corona).
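For reference, the quoted value can be checked against the standard hydrodynamic escape parameter, lambda = G M m_H / (k_B T R). This definition is my assumption, since the post does not spell it out:

```python
import math

# Sanity check of the quoted lambda = 14.82 for the solar corona, assuming
# the standard hydrodynamic escape parameter lambda = G M m_H / (k_B T R).
G, k_B, m_H = 6.674e-8, 1.3807e-16, 1.6726e-24     # cgs
M_sun, R_sun = 1.989e33, 6.957e10

def escape_parameter(M, R, T):
    return G * M * m_H / (k_B * T * R)

# Temperature implied by lambda = 14.82 for the Sun:
T_implied = G * M_sun * m_H / (k_B * 14.82 * R_sun)
print(round(T_implied / 1e6, 2))  # ~1.56, i.e. a typical coronal temperature in MK
```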

-Case for lambda = 10.6, 1 Jupiter mass, 10,000 K. The ratio of rho-wind to rho-ambient was 100.

Movie here: http://www.pas.rochester.edu/~mehr/wikistuff/gamma10.gif

Whereas the S&P case for lambda = 5 looks like this:

Journal Club - 10/23/2014 - NTSI

NTSI due to 50% velocity perturbation:

movie

Drastically different without perturbation:

movie

NTSI still can't take over with a "weak" 10% perturbation:

movie

Figures for Christina's Runs

Christina's:

Plot Christina Erica
Total "Cloud" Mass (cloud = gas > 100 cm^-3) over time X
Total sink mass/(cloud mass + sink mass) over time Will work on
Mass in cold gas (T < 300 K) X
Density weighted histogram of ratio L-R X
Histogram of (L-R)/(L+R) in cloud over time (color plot) I'll work on
Histogram of (L-R)/(L+R) in sinks over time (scatter plot with color on top) Work on
Standard deviation of histogram at end time, spread across shear
% prograde vs retrograde sink particles X
or % mass in sinks that is prograde vs retrograde X
specific angular momentum density (scatter plot histogram to show spread) X
histogram of specific angular momentum density for whole cloud - does it correlate with shear? X
Bulk magnetic, gravitational, thermal, kinetic over time (in box or clouds?) X
Energy spectra I'll work on
Energy in sinks over time I'll work on
n-T histograms at final time X
velocity dispersion I can work on
column density maps at times when gas > 10^21, 10^22, 10^23 X

triggered star formation outflow comparison

More colliding Al flows

Changed the setup a little bit so that there is no gap now (no ambient). The perturbation is now pseudo-random: the bottom half of the domain is initialized with either a density or velocity perturbation containing multiple sinusoidal modes with random phases. Below is an image of an initial 5% amplitude density perturbation.
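A minimal sketch of this kind of pseudo-random, multi-mode perturbation; the mode list, seed, and normalization here are my choices, not the actual problem module:

```python
import numpy as np

# Sum of sinusoidal modes with random phases, applied as a fractional
# amplitude (e.g. 5% or 10%) to density or velocity.
def multimode_perturbation(x, amplitude=0.05, modes=(1, 2, 3, 4), seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(modes))
    pert = sum(np.sin(2.0 * np.pi * k * x + p) for k, p in zip(modes, phases))
    pert /= np.max(np.abs(pert))   # normalize so the peak is +/- amplitude
    return amplitude * pert

x = np.linspace(0.0, 1.0, 256, endpoint=False)
rho = 1.0 * (1.0 + multimode_perturbation(x, amplitude=0.05))  # 5% density pert.
```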

I ran 4 models: density and velocity perturbations, 5% and 10% amplitudes. All simulations were run to 300 ns.

     | Density Pert. | Velocity Pert.
5%   | movie         | movie
10%  | movie         | movie

I also made lineouts of density and temperature across the middle of the cooling region.

     | Density Pert. | Velocity Pert.
5%   | movie         | movie
10%  | movie         | movie


No cooling

For comparison, here is a 10% velocity perturbation run without cooling.

movie movie


Inflow perturbed

One idea that we had was to keep the inflow from the bottom boundary perturbed with the same perturbation that is initialized in the bottom half of the domain. This drives the perturbation for a longer time and should lead to more dynamical effects. The perturbation is again 10% on velocity.

movie movie

Meeting Update

Will be working on data analysis tools and job restarts this week - need to consider our resources.

Meeting Update 10/20/2014 - Eddie

Wrote a program to compute the convolution of an emission map with an Airy disk. This could be included in the code, probably in projections.f90, but I'll talk with Jonathan later about the best way to implement it if we want it.

Here is the Airy disk that I used for my convolution:

To see how this changed my emission map, here is an image of the original jet alongside the new emission map after the convolution. This process basically blurs the image a bit to more accurately represent what the telescope would see. In this case, this is what HST would see if my jet was located at Orion.
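The convolution step can be sketched as follows; the Airy scale and kernel size here are placeholder values, not the actual HST/Orion numbers:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

def airy_psf(npix, scale):
    """Airy-disk point spread function: I(r) = (2 J1(x)/x)^2 with x = r/scale."""
    y, x = np.indices((npix, npix)) - (npix - 1) / 2.0
    r = np.hypot(x, y)
    xarg = np.where(r == 0, 1e-12, r / scale)  # avoid 0/0 at the center
    psf = (2.0 * j1(xarg) / xarg) ** 2
    return psf / psf.sum()  # normalize so convolution conserves total flux

def blur(emission_map, scale=3.0, npix=33):
    """Convolve an emission map with the Airy PSF (what the telescope sees)."""
    return fftconvolve(emission_map, airy_psf(npix, scale), mode="same")
```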


Back to Sulfur tracking results

Since implementing sulfur tracking, I have only been able to successfully complete the hydro run for my 2.5-D pulsed jets. Below is an image showing the original emission map alongside the new one. As expected, allowing for SIII and SIV species means less SII and thus "greener" (higher H-alpha/[S II]) emission.

Still having restart issues with the MHD runs. It looked like lowering the CFL might have helped at first, but the restarts still occur at later times. The restarts also occur at different times depending on how many cores are used. This leads me to believe that the real problem might be how the grid is distributed and how the AMR is refining.

Meeting Update 10/20/2014

  • S&P also track the subsonic to supersonic wind profile against radius. Need to investigate that.

Update with 1024cores

I ran the Beta 10 No Shear case on Stampede with 1024 cores. Here is the result (see Table below):

So if we're at frame 246, we have 154 frames left. Dividing 154 by our rate gives 9.4 days (225.6 hrs) to run this simulation out. Thus, 225.6 * 1024 = 231,014.4 cpu hrs. Multiplying this by 4, as we have 4 runs, yields approximately 924,057.6 cpu hrs total. This is not much different from the total from last week. It does not seem economical to run these on 1024 cores; in my opinion we might as well just run them on 2048 cores, as they'll be faster with little to no shift in cpu hrs.

Perhaps we should choose just a few cases on 2048 cores?

If on 2048 cores we estimate 34.85 frames a day (average of rates from last blog post) with approximately 164 frames left (average from last blog post) that implies that we have approximately 5 days to run a simulation, or 113 hours. This is approximately 231,304 cpu hrs. With 3 runs, that is 693,911 cpu hrs. With 2 runs that is 462,607 cpu hrs.
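The arithmetic above can be checked directly:

```python
# Rough cpu-hour estimate for finishing a run on 2048 cores, using the
# averages quoted above (~34.85 frames/day and ~164 frames left per run).
def cpu_hours(frames_left, frames_per_day, cores):
    days = frames_left / frames_per_day
    return days * 24.0 * cores

per_run = cpu_hours(164, 34.85, 2048)
print(round(per_run))        # ~231,304 cpu hrs per run
print(round(3 * per_run))    # ~693,911 cpu hrs for 3 runs
```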

Perhaps we could split the runs between machines? However we aimed to use Stampede because it is so fast in comparison to the likes of BlueStreak.

Run (Current Frame) | Hours Sampled  | Time (mins) | Avg. time (mins)
b10s0 (246)         | 23:30 - 00:50  | 80          |
                    | 10:27 - 11:58  | 91          |
                    | 19:44 - 21:16  | 92          | 87.67

Table. The Beta 10 Shear 0 run with current frame for which the hours were sampled and averaged to do the brief calculations above.

Theoretical model and numerical result of wind without perturbation

Below is the numerical result from AstroBEAR.

Between time = 0.20 and time = 0.25, a pulse of wind is launched. Density and velocity vary with time.

3_6.gif

Update 10/16/2014

I'm studying for the physics GRE, so I might be locked away in a library for the next week and a half. Wish me luck.

Made some VisIt documentation! Weeeeeeeeeee

Working on fly-throughs for the CollidingFlows problem by referencing Jonathan's blog post. So far I've implemented his problem.f90 and got the code to make properly. I've got some results to visualize, so I'll post those later when I get around to it.

Managing some jobs on Stampede, however things are pretty expensive.

cpu hrs for CF runs on Stampede

Running the CollidingFlows problem out from frame 200 to 400 to double the time and see if we can observe any more sink formation. Given that this run is really computationally intensive, I've done a quick calculation for cpu hrs based on some current runs I am doing on Stampede. All runs are in the normal queue for 24 hrs on 2048 cores. The table below provides the current frame number at which I collected this data. We can see that the average time for our code to spit out a frame is (underscores correspond to the run):

Given that there are 1,440 minutes in a day, we'd spit out the following frames per day:

Considering that the differences between the current frame and the last frame (400) for beta10 shear 0, 15, 30, and 60 are 179, 182, 159, and 136 respectively, we're looking at running these out for approximately 5-6 days on 2048 cores. Specifically: b10s0: 5.5 days, b10s15: 5.8 days, b10s30: 5.2 days, and b10s60: 3 days. Using this number of days, 24 hours in a day, and 2048 cores puts us at a total of 957,973 cpu hrs. THAT IS INSANE.
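The per-run day estimates follow directly from the sampled frame times in the table below:

```python
# Reproducing the per-run estimates: average minutes per frame (from three
# samples each, see table) and frames left to reach frame 400.
runs = {  # run: (avg min/frame, current frame)
    "b10s0":  (44.3, 221),
    "b10s15": (46.0, 218),
    "b10s30": (47.0, 241),
    "b10s60": (32.0, 264),
}
for name, (mins_per_frame, current) in runs.items():
    frames_left = 400 - current
    days = frames_left * mins_per_frame / 1440.0  # 1440 minutes per day
    print(name, round(days, 1))
```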

After a quick discussion with Erica and Baowei, I've come up with the following short-term plan: once these jobs stop later today, I'll submit 1 job to the normal queue on 1,000 cores. For this run I'll make the same calculation and see if it is more economical when multiplied by 4. Baowei has also suggested throwing runs on Gordon, another machine owned by the Texans. We have a lot of SUs there, so he is currently setting me up. We currently only have 1,551,296 SUs available on Stampede — so running our jobs for this problem there could be quite precarious.

Run (Current Frame) | Hours Sampled  | Time (mins) | Avg. time (mins)
b10s0 (221)         | 16:07 - 16:55  | 48          |
                    | 02:58 - 03:38  | 40          |
                    | 07:13 - 07:58  | 45          | 44.3
b10s15 (218)        | 18:03 - 18:48  | 45          |
                    | 02:19 - 03:03  | 44          |
                    | 11:05 - 11:54  | 49          | 46
b10s30 (241)        | 17:57 - 18:40  | 43          |
                    | 00:26 - 01:23  | 57          |
                    | 07:03 - 07:44  | 41          | 47
b10s60 (264)        | 17:40 - 18:07  | 27          |
                    | 00:04 - 00:38  | 34          |
                    | 07:43 - 08:18  | 35          | 32

Table. Each run with current frame for which the hours were sampled and averaged to do the brief calculations above.

Colliding Al Jets

Using typical lab parameters, I ran simulations of colliding Al jets. To simplify the problem, I ran in 2-D with the jet width = domain width. This essentially reduces to colliding two slabs.

The slabs are initialized with opposing velocities, and the bottom slab has an initial 10% sinusoidal velocity perturbation on its interface.

I ran two cases: the first is adiabatic, and the second has a special Al cooling table.


Adiabatic

As expected, the slabs collide and create a hot central shock that propagates back towards the boundaries. The central region remains hot and thus at a low density. The interface maintains an awkward shape due to the perturbation, but this shape is fairly constant.

movie

movie


Cooling

Note that the legends are much different. This is because I'm trying to highlight the regions of high density and low temperature. With this scaling, you don't see much in the density until about halfway through the simulation. Thus, the cooling_rho.gif movie only covers the second half of the simulation.

The simulation starts out differently because cooling occurs immediately at the interfaces of the "jet heads". This quickly morphs into a similar configuration as the adiabatic case in which there is a hot central region with shocks propagating back towards the boundaries.

However, since cooling is now on, this central region eventually cools and forms dense regions. There is a periodicity to this because of the chosen perturbation.

movie

movie

There are times at which these high density regions look "clump-like". The key here is in the perturbation. The high velocity regions of the interface collide first and can thus start cooling sooner. Their shock strength is also slightly higher which leads to stronger cooling. This is where high density knots can form.

So unless the jet heads collide at the same time perfectly along the entire interface, you should always get high density knots within the post-shock region.


NTSI

I found that the perturbation that I was using was either too short-lived or not strong enough. I altered it a bit, and got some nice looking instabilities (namely, the Non-linear Thin Shell Instability).

For this simulation, I put a 15% velocity perturbation on both of the colliding flows. The average velocity and the continuously injected velocity from the top and bottom domain boundaries are 60 km/s. As a reminder, here are all the relevant parameters:

nslab = 1e-5 g/cc
tslab = 10 eV
vslab = 60 km/s (initially +/- perturbation)

namb = 1e-8 g/cc
tamb = 0.1 eV

runtime = 100 ns
domain size = 2 cm x 2 cm
resolution = 64 x 64 cells + 3 AMR levels

The ambient is sufficiently low density and low temperature to play the role of a vacuum.
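For a rough sense of how strong these collisions are, the Mach number of the 60 km/s flows against the slab sound speed can be estimated. The assumption of a neutral ideal Al gas with gamma = 5/3 is mine; ionization would lower the mean particle mass and hence the Mach number:

```python
import math

# Mach number of the 60 km/s flows against the 10 eV slab sound speed,
# assuming an ideal gas of neutral Al (mean particle mass 27 m_p, gamma = 5/3).
gamma, m_p, eV = 5.0 / 3.0, 1.6726e-24, 1.6022e-12   # cgs
t_slab = 10.0 * eV          # slab temperature in erg
m_al = 27.0 * m_p           # Al atomic mass
c_s = math.sqrt(gamma * t_slab / m_al)   # adiabatic sound speed, cm/s
mach = 60.0e5 / c_s
print(round(c_s / 1e5, 1), round(mach, 1))  # ~7.7 km/s, Mach ~7.8
```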

Note that in the following image/movie, the legend is constantly changing. This is because the cooling keeps increasing the maximum density and I wanted to always have the densest regions in red. The densities are in computational units (so in units of 1e-5 g/cc).

movie

Once you enter the highly non-linear regime of the NTSI, you get filaments of high density scattered throughout the cooling region. Below is temperature in eV.

movie

And just for comparison, here is the density image/movie for a simulation without cooling. Due to the shape of the perturbation, you get an interesting pattern, but you can see that it does not have the NTSI. Thus, there is no high-density fragmentation.

movie

Testing planetary mass with the outflow object

The linked gifs are for outflows with the following parameters: = 100, = 0.001, outflow radius of 10, outflow thickness of 3, = 200, = 0.0001, = 2, = 0.2, and the mass is varied from 1 Jupiter mass (first two plots) to 0.001 Jupiter masses (second two plots). In the first case, since gravity is very strong, infall forms a column of superheated ambient gas (inaccurately dubbed the 'laser planet' - easier to remember than 'superheated plasma planet'). In the second case, gravity is weak enough that the outflow is unimpeded and symmetric.

The second case suffers from some artifacting with respect to the outflow boundary.

1 Jupiter mass planet, T/v plot

1 Jupiter mass, rho/v plot

0.001 Jupiter mass T/v plot

0.001 Jupiter mass rho/v plot

2-D test run for Mach stem project

I think I found an excellent regime for my Mach stem runs.

density contrast X = 500
shock strength M = 5
run time t = 25 years

With 2 clumps: one stationary and one moving at 10 km/s. For the image/movie below, I used gamma = 1.01 and a horizontal clump separation of 4 rclump (center-to-center). The effective resolution is 80 cells/rclump.

movie

The run only took about 12 minutes on 64 cores. It will take longer to run with cooling and in 3-D, but it should be very doable at a decent resolution.

Primary star frame system .VS. COM frame system

EVERYTHING IS ABOUT "AS IF"

If we locate the origin at the primary star, Newton's law for the separation vector r (pointing from primary to secondary) is

  d^2 r/dt^2 = -G(M1 + M2) r_hat / r^2

That is, it uses G(M1 + M2) instead of G M1, because the frame attached to the primary is non-inertial.

r1 = M2 r / (M1 + M2) is the distance from the primary to the COM.

r2 = M1 r / (M1 + M2) is the distance from the secondary to the COM.

If we locate the origin at the COM, the same relative equation of motion can be written as

  mu d^2 r/dt^2 = -G M1 M2 r_hat / r^2

So Newton's law does not really change if we just use the reduced mass mu = M1 M2 / (M1 + M2).

For the fluid mechanics part, positions in the two frames are explicitly related by a shift by the primary's position, and the primary's position in the COM frame is known - it is doing circular motion.

Therefore, it is the mapping from COM frame to primary frame that is complicated. As long as we always work in the same frame, we are fine.

NOTE: All quantities with com subscript are known.

The trick here is to use the reduced mass in the kinematic equation but the actual masses in the fluid equations and Newton's law. Things will get more complicated if the secondary accretes gas - we would need to update the reduced mass every time step.

Actually, you can view every parcel of gas as a body; each parcel has its own reduced mass with respect to either star, which is approximately its own mass. That justifies why we don't need to change the density. If we had to change the density, serious things would happen - the EOS would change, and that would affect the dynamical picture.

Code Meeting 10/07/2014 - Eddie

HLLC Solver

  • The solver passed all 5 tests from Ch. 6 in Toro. The test parameters are given on my wiki page: u/ehansen/buildcode
  • I tried adding transverse velocity (vy) to see if it would affect the solution to the Riemann problem. I used test 4 which is a shock/contact/shock test.
  • I tried running my clump simulation with the clump and ambient/wind moving in the x-direction instead of the y-direction as before. This would test if the sweep method is causing asymmetry. If we find that the sweep method is the culprit, it would still be strange for the HLLC solver to produce more asymmetry than the Exact solver. Perhaps it is the combination of the sweep method with HLLC that is bad.
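For reference, here is the HLLC middle-wave ("star") speed estimate from Toro, evaluated on Sod's test as a quick sanity check. This is a sketch, not the astrobear implementation:

```python
import math

# HLLC contact ("star") wave speed with simple Davis wave-speed bounds.
def hllc_star_speed(rL, uL, pL, rR, uR, pR, gamma=1.4):
    aL = math.sqrt(gamma * pL / rL)   # left sound speed
    aR = math.sqrt(gamma * pR / rR)   # right sound speed
    sL = min(uL - aL, uR - aR)        # Davis estimate for the left wave
    sR = max(uL + aL, uR + aR)        # Davis estimate for the right wave
    # Contact-wave speed from conservation across the two outer waves
    s_star = (pR - pL + rL * uL * (sL - uL) - rR * uR * (sR - uR)) / \
             (rL * (sL - uL) - rR * (sR - uR))
    return sL, s_star, sR

# Sod's shock tube: the contact speed should lie between the outer waves.
sL, s_star, sR = hllc_star_speed(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
print(sL, s_star, sR)
```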

movie


Pulsed Jet Restarts

  • The hydro run finished, but it took almost 35 hours on 256 procs. I know that my original runs used more procs, so they ran in less time, but this still seems a bit high to me.
    • I found 18 restarts due to High CFL, which seems higher than normal.
    • How much slower should the code run because of the SII, SIII, and SIV tracers?
  • For the MHD runs, there are a lot of restarts that will make it impossible to finish. I tried the beta=5 model, and I only got 20 out of 100 frames in 48 hours on 256 procs. The estimated wall time remaining got up to about 1 month.
    • These are 'restarts due to nan in flux', which do not occur during the hydro run.
    • I attached a list of restarts that shows position, time, CFL, max speed, etc. It seems like most of them occur at the head of the jet, but it's a bit hard to tell without overplotting things, and I didn't have enough time to figure out how to do that.

movie

Angular dependence of outflow temperature

In the vein of Stone and Proga (2009), we've made a first cut at implementing an angular dependence for heating. The heating rate follows S&P's equation (6), with the maximum at the line connecting the centers of mass of the star and the planet; stellar wind velocity = 10 (not exactly in line with S&P's second case). Further testing is in progress.

attachment:2D_test.gif

Meeting Update 10/06/2014 - Eddie

  • Implemented sulfur tracking: ehansen10042014
    • new 2.5-D jet runs: hydro run has finished, working on MHD runs now
  • testing 2-D clumps with different parameters for 3-D Mach Stem runs
  • testing HLLC solver in 1-D
    • using tests straight from Toro which I used 3 years ago on my own hydro code: u/ehansen/buildcode

Meeting Update 10/06/2014

-Included the extrapolative BCs on both sides in 2D.

http://www.pas.rochester.edu/~mehr/wikistuff/movie.gif

-Comparison with S&P results.

Tracking Ionized Sulfur Species

See ticket #433 for a brief description.

I found recombination rates from Nahar 1996. There is a nice table with many more species that could potentially be used in the future.

Ionization rates were a bit more complicated. I used Arnaud & Rothenflug 1985, who give some complicated formulas for the rates of several species. Again, the other species could be implemented in the future. I wrote a separate Fortran program which uses these formulas to generate a table of ionization rates. The generated table is in a format that is compatible with astrobear's table and chemistry routines.

I tested my program with H ionization, and I was able to recover the table that we currently use in astrobear. This program is so useful that I'm going to check it into the main repo so that we can use it in the future for other species.
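The table-generation idea can be sketched as below. The rate formula here is a generic placeholder with the right qualitative shape, NOT the actual Arnaud & Rothenflug fit, and the file format is illustrative:

```python
import math

# Evaluate a rate formula on a log(T) grid and write two columns
# (log T, rate) that a table-lookup routine can read.
def placeholder_rate(T, ionization_potential_eV):
    """Illustrative collisional-ionization-like shape: exp(-I/kT)/sqrt(T)."""
    kT_eV = T / 11604.5  # convert K to eV
    return math.exp(-ionization_potential_eV / kT_eV) / math.sqrt(T)

def write_rate_table(fname, potential_eV, logT_min=3.0, logT_max=7.0, n=81):
    with open(fname, "w") as f:
        for i in range(n):
            logT = logT_min + i * (logT_max - logT_min) / (n - 1)
            f.write("%8.4f %12.5e\n" % (logT, placeholder_rate(10**logT, potential_eV)))

write_rate_table("sii_rates.dat", 23.4)  # SII ionization potential from the text
```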

Here are the ionization rates for SII and SIII. The x-axis is log(T) and the y-axis is the rate in units of 10^-9 cm^3 s^-1.

These follow a trend that is common to ionization rates, and it makes sense that SIII peaks at a higher T than SII. The peaks don't seem that far apart because the ionization potentials (23.4 and 35 eV) are pretty close in log(T) space. Below are the recombination rates for SIV and SIII. Same units as above except the rates are in log space now.

These were straight from a table, no calculations necessary, which is quite convenient. It makes sense that SIV recombination rates are a bit higher, since it should be harder to ionize SIII / easier to recombine SIV. This is consistent with the ionization rates as well.


Jets

I tested the implementation with a low resolution, non-MHD, pulsed jet. Below are plots of SII, SIII, and SIV density (from left to right in computational units).

The plot makes sense qualitatively. SII peaks inside the clumps, aka cooling regions. SIII peaks at the edges of clumps and on the wings, aka shocks. There seems to only be SIV on the spur shocks where it is hot enough. I did find a typo in the SIV recombination table which was extremely high at one data point, so there's a chance that the SIV density should be slightly higher but I think it still looks good. We didn't expect much SIV anyways.

Despite that faulty data point, I think it is fair to say that sulfur tracking is working properly. The emission maps should not have changed much, since all I had to do was multiply the output by SII/Stot where Stot = SII + SIII + SIV. The emission maps really need higher resolution to be accurate, so there's no sense in looking at the ones from this run. I am going to set up production runs now.
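The rescaling itself is a one-liner; here is a sketch with illustrative array names (guarding against cells with zero total sulfur):

```python
import numpy as np

# The [S II] emission rescaling described above: multiply the emissivity by
# the SII fraction, SII / (SII + SIII + SIV).
def rescale_sii_emission(emissivity, sii, siii, siv):
    stot = sii + siii + siv
    frac = np.divide(sii, stot, out=np.zeros_like(stot), where=stot > 0)
    return emissivity * frac
```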

RT Instability

Two simulations have the same following parameters:

Temperature=20K

Density varies as:

Same wind launching radius (1 AU) and resolution (320x320 base grid of one quarter plane with 3 levels of AMR)

Wavenumber: 40 waves of perturbation in 2 radians

Density of Perturbation: Both are 2.5% of the peak density.

The two simulations differ only slightly in velocity:

[[Image(lowvelocity0200.png,width=400)]][[Image(highvelocity0200.png,width=400)]]

lowvelocity.gif highvelocity.gif

Clip operator

Clipping a cylinder and rotating it allows me to isolate the collision region in VisIt, take some sums of the mass contained there, and compare with the critical mass.

Playing with contours and streamlines on low res data

Note to self: sessions are saved in my home directory on BH

CDM and field plot for poster

Here is what a projection of beta looks like (this is made by taking the projection of thermal pressure over magnetic pressure, B^2/2):

Here is a projection of inverse beta:

I like working with beta better, so that is what I am using in the plots below.
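For reference, a sketch of how such a projected-beta map can be computed; the array names are illustrative, not AstroBEAR variables:

```python
import numpy as np

# Projected beta: beta = p_thermal / p_mag with p_mag = B^2/2 (code units),
# then integrated along the line of sight.
def beta_projection(p_thermal, bx, by, bz, axis=2, dz=1.0):
    p_mag = 0.5 * (bx**2 + by**2 + bz**2)
    beta = p_thermal / p_mag
    return beta.sum(axis=axis) * dz   # column-integrated beta along `axis`
```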

The idea is - next to the CDM's on the poster, we will have a plot of the field. Here are some possibilities:

Two pseudocolors overlaid on top of each other, one that is N2 and one that is beta. The CDM of N2 is in grays underneath the hot color plot of beta:

Beta with density contours on top:

I think I like the contour plot better — the overlaid pseudocolors are potentially misleading.