Posts for the month of May 2014

Colliding Alumin(i)um Plasmas

The point of these simulations is to study cooling instabilities that may be present in Francisco's colliding jet experiments. To simplify the problem, the simulations are just 2-D colliding flows with the appropriate Al cooling. The initial parameters are as follows:

T = 10 eV

rho = 10 ug/cc

v = 60 km/s

Z = 6 (ne/ni, the electron-to-ion number density ratio)

The grid is 2 x 2 cm and 64 x 64 cells with 6 levels of AMR. The total runtime is 100 ns.

In the experiments, the "ambient" is a vacuum. To get around this in AstroBEAR, I initialized the flows so that they already fill the domain. The flows therefore collide immediately at the beginning of the simulation, and they continue to flow in from the top and bottom boundaries. The left and right boundary conditions are periodic.
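For reference, here is a minimal sketch of that initial condition (the actual problem module is AstroBEAR Fortran; the array names, layout, and units below are purely illustrative):

    import numpy as np

    # Illustrative sketch only: two counter-propagating Al flows filling the domain.
    nx, ny = 64, 64          # base grid; the real run adds 6 levels of AMR
    Lx, Ly = 2.0, 2.0        # domain size in cm
    rho0, T0 = 10.0, 10.0    # ug/cc and eV
    v0 = 60.0e5              # 60 km/s in cm/s

    y = (np.arange(ny) + 0.5) * Ly / ny - Ly / 2.0   # cell centers; collision plane at y = 0

    rho = np.full((nx, ny), rho0)
    T   = np.full((nx, ny), T0)
    vy  = np.where(y > 0.0, -v0, v0) * np.ones((nx, 1))  # top half flows down, bottom half flows up

    # In the real run the top/bottom boundaries keep injecting this same state,
    # the left/right boundaries are periodic, and the flows collide at y = 0 for 100 ns.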

Below are images and movies of the temperature in eV and the density in ug/cc. They are zoomed in on the region of interest.

movie

movie

Meeting Update 05/27/2014 - Eddie

Science

  • The new high resolution 2.5-D pulsed jets (5 models) are running on bluehive2 and stampede.
  • Wrote a problem module to study cooling instabilities in Francisco's colliding jets experiments. Now working on implementing an aluminum cooling table.
  • Need to analyze Mach stem simulation data, and possibly do some more runs to write HEDLA proceedings.

Code

  • Made a quick fix to the code (ticket #401)
  • Need to look at Martin's jet/outflow module and combine my work with his.
  • Julio should have the code now. I will give him some time to get set up and play around with it, and then I will contact him at the end of the week to see where he is at.

Outflow-wind interactions

Meeting update

Talked with Jonathan about tasks and threading; will need to adjust the SLURM script on Bluestreak to use threading.

Made tickets for documentation, assigned each group member a ticket to begin working on — what are our names on the wiki?

Changed due dates for development tickets

Read a lot of colliding flows papers last week; am putting together an email to Fabian about meeting next week.

Hydro Mach 10 shock TSF with linear velocity profile wind

See here:

http://www.pas.rochester.edu/~shuleli/tsf_paper/images/velpro1_0090.png

http://www.pas.rochester.edu/~shuleli/tsf_paper/velpro1.gif

The wind is initially Mach 10; it continues for 0.2 Myr and then linearly slows down to zero over another 0.2 Myr.
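For clarity, a sketch of that wind profile (the function and parameter names are illustrative, not taken from the actual module):

    def wind_mach(t_yr, t_steady=2.0e5, t_ramp=2.0e5, mach0=10.0):
        """Wind Mach number vs. time in years: Mach 10 for the first 0.2 Myr,
        then a linear ramp down to zero over the next 0.2 Myr (sketch only)."""
        if t_yr <= t_steady:
            return mach0
        if t_yr <= t_steady + t_ramp:
            return mach0 * (1.0 - (t_yr - t_steady) / t_ramp)
        return 0.0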

MHD simulations of the circumnuclear disk - Marvin

I have now finished a simulation that contains all the relevant physics, including cooling, MHD and the outflow object.

But first I want to recall two of my previous simulations, which do not include MHD: The first animation shows the surface density of the accretion disk's inner region. The inner black circle marks the outflow object, and the outer black circle marks the initial inner rim of the accretion disk. We see that, although no physical viscosity is present, material is accreted when the outflow interacts with the disk, i.e. the accretion disk's inner rim moves inwards until it reaches the outflow object.

The second animation shows the same but without the outflow object; this is just to show that the accretion disk's inner rim finds a stable configuration, and that the gas inside the inner cavity has a density of about 1000 cm^-3, about 30 times lower than the accretion disk's density.

The third animation now includes MHD and the outflow object. The disk has a toroidal initial field configuration with an initial field strength of 1 mG. There are still clumps and streams of matter forming and moving inwards, but aside from these features the inner rim seems to be stable, with densities in the inner cavity of about 1000 cm^-3, as seen in the second animation. So magnetic fields seem to play an important role in forming the inner cavity of the galactic center's accretion disk.

The fourth animation shows the corresponding face-on magnetic field strength.

First animation: surface density, without magn. field

Second animation: surface density, without magn. field and without outflow

Third animation: surface density, with magn. field and with outflow

Fourth animation: face-on magnetic field strength in Gauss

This leads to two important questions:

  1. Why is the disk collapsing when the outflow is switched on?
  2. Why and how do magnetic fields prevent this?

Some time ago we discussed a simple model for the extraction of angular momentum from the inner accretion disk:

Let's assume the accretion disk has an inner rim at radius r_0, and that the inner part of the accretion disk (a ring with mass M) fully interacts with the outflow. The outflow has a mass flow rate Mdot, so after a time Δt the mass ΔM = Mdot Δt has been added to the ring. That means that the mass of the ring increases, but its angular momentum does not change, because the wind does not carry any angular momentum. So the specific angular momentum decreases, leading to accretion of the disk's material. However, there must be a kind of critical outflow rate: although the wind does not add any angular momentum to the disk, it does add radial momentum, and at some point the radial momentum will simply win against the loss of angular momentum. To test this simple model I did the following very rough estimate. Both the ring with mass M and the wind with mass ΔM have a specific potential energy and a specific kinetic energy (the wind due to its radial velocity v_w and the ring due to its azimuthal velocity v_phi). If we assume perfect mixing of the ring with the wind material, we can calculate the total energy of the "new" ring (with mass M + ΔM), which is just the sum of the aforementioned energies.

When we further assume that the "new" ring settles into a new orbital configuration at a radius r_1, we can use the virial theorem (the total energy of a circular orbit is half its potential energy) to calculate its new orbit.

The result is that if the wind velocity is smaller than the azimuthal velocity we would expect an inflow, otherwise an outflow of disk material. In the GC the wind velocity is larger than the disk's azimuthal velocity, so we would always expect the disk to be pushed away from the central black hole, contrary to what we see in the simulations. So apparently this simple model does not account for the angular momentum loss.
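Since the inline math above did not survive, here is a sketch of the algebra with reconstructed symbols (ring mass M at radius r_0 with azimuthal velocity v_phi, wind mass ΔM = Mdot Δt moving radially with speed v_w, central mass M_BH):

    \begin{align}
      v_\phi^2     &= \frac{G M_{\rm BH}}{r_0}, \qquad
      E_{\rm ring}  = M\left(-\frac{G M_{\rm BH}}{r_0} + \tfrac12 v_\phi^2\right)
                    = -\frac{G M_{\rm BH} M}{2 r_0}, \\
      E_{\rm wind} &= \Delta M\left(-\frac{G M_{\rm BH}}{r_0} + \tfrac12 v_w^2\right), \\
      E_{\rm new}  &= E_{\rm ring} + E_{\rm wind}
                    \overset{\text{virial}}{=} -\frac{G M_{\rm BH}(M+\Delta M)}{2\,r_1}
      \;\Longrightarrow\;
      r_1 = r_0\,\frac{M+\Delta M}{M + \Delta M\,\bigl(2 - v_w^2/v_\phi^2\bigr)}.
    \end{align}

With these assumptions r_1 < r_0 (inflow) whenever v_w < v_phi, and r_1 > r_0 (outflow) otherwise, which is the criterion stated above.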

I furthermore made a rough estimate of the code performance on XSEDE's Gordon cluster, as shown in the following figure:

Update 2: Some Column Density Maps

Here are some plots of the CollidingFlows Shear 15 data (Beta 1 and 10: x, y, z fields, and Hydro). Each is a column density map made from each .bov file along its respective axis.
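As a reminder of what these maps are (the actual maps were made in VisIt from the .bov files; the array and cell size below are stand-ins), a column density map is just the density integrated along the line of sight:

    import numpy as np

    def column_density(rho, dx, axis=2):
        """Collapse a 3-D density array along `axis` into a 2-D column density map,
        Sigma ~ sum(rho) * dx.  `rho` and `dx` are illustrative stand-ins for the
        .bov data and cell size."""
        return rho.sum(axis=axis) * dx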

Beta 1, Shear 15

GIF

GIF

GIF

Beta 10, Shear 15

GIF

GIF

GIF

Hydro, Shear 15

GIF

Did the best I could to orient them appropriately without the triad VisIt produces (which is misleading, as it does not indicate the actual direction of the flows). The min/max is kept the same for each plot.

Meeting Update 05/19/2014 -- Baowei

  • Science
    1. 2D Ablative RT: tried a lower tolerance (according to blog:johannjc05132014) with max iterations 10000; the maximum number of iterations was reached and a negative temperature was found at frame 316 — comparing frame 356 with the run using the earlier tolerance and max iterations 1000.
       (Plots: Rho, Temperature)
    2. Helped Rui set up on bluehive2. He's analyzing the front rate using his code.
    3. 3D Ablative RT: still working on transferring the hdf4 data from Rui to text form.

First Post (Update 1): Some Colliding Flows

Hi!~ so here is a link to my page http://astrobear.pas.rochester.edu/trac/wiki/u/madams

Previously I had experimented with making visualizations of Erica's CollidingFlows data on Bamboo during the semester. I created some slices of the Shear 15 data for Beta 1, Beta 10, and Hydro. The most notable effect is that the stronger the magnetic field, the more it confines the fluid. This is expected, but it will become clearer for these data sets as I post more over the next few days.

Here I will post some density (rho) plots pertaining to the Beta 1, Shear 15 set with a magnetic field in the x-direction. Each of these movies is a slice in the yz, xy, or xz plane.

min: 0.65, max: 250

GIF

GIF

GIF

Notes:

  • Attempted to make a three-slice visualization with these slices. VisIt started crashing on hay. Need to talk to Rich sometime soon about VisIt issues.

Objectives:

  • Visualizations for density (Beta 10, and Hydro).
  • Make some velocity plots for these three data sets (Beta 1, 10 and Hydro).

Meeting update - 5/19/14

BE paper

  • Got all the proofs back to ApJ
  • Sent you the form for funding the paper. Not sure if this is needed before they publish?

MHD Shear Flows

  • Need to decide on a field orientation for the 'perpendicular' case. The Beta=1 effects for a uniform field in y or z are too strong (see below). Talking with Jonathan about this, it seems a highly unrealistic initial field orientation to begin with — how could you have this large-scale perpendicular field and a tilted interface in the first place? If there were some large-scale perpendicular field, then by the time the flows collided they would have no tilted interface. A more realistic perpendicular case might be a helical field inside of the colliding flows only (no ambient field)? This would simulate a perpendicular field without worrying about the symmetry broken by the tilted interface. For now though, I am running 3D low-res sims of the y and z fields with a weaker field.
  • The data set for these runs, as they are now, is extremely large. I am at ~120 GB now, ¼ of the way through the production Shear15, Beta=1, x-field run. This is for ~1500^3 effective cell resolution (48^3 base grid + 5 levels; see the sketch after this list). I estimate each run will be ~1 TB in size. Where to store all this data?
  • Stuff I still need to finalize for production runs: 1) Potentially increasing the box size to prevent back-flow into the outflow boundary. This will most likely be a problem in the Shear 30 and 60 cases (see my page). I can do this either by adding code to enable a restart with a bigger box once the back-flow hits the wall, or by just running the sim with a longer box in x from the start. 2) Deciding on the parameters for the flow, i.e. the Mach number, ramp-down times, etc., for production.
  • Runs possible: Beta 1/10, 3 shear angles, 2 field orientations = 12 runs.
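A quick sanity check of the resolution quoted above (assuming a 48^3 base grid and factor-of-2 refinement per level):

    base, levels = 48, 5
    effective = base * 2**levels
    print(effective)   # 1536 cells per side, i.e. the ~1500^3 effective resolution quoted above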

High Res, Shear 15, Bx, Beta = 1

  • ¼ of the way through on BS — no NaNs, but did see source errors

Low Res, Different Field Strengths/Orientations

Here is my wiki page on this: http://astrobear.pas.rochester.edu/trac/wiki/u/erica/LowResMHDShearFlows

And last week's blog post talked about what I was seeing in the Beta=1 case: http://astrobear.pas.rochester.edu/trac/blog/erica05122014

Hydro

Bx

Beta10, Beta1

movie, movie

By

Beta 10, Beta1

movie, movie

Bz

Beta 10, Beta 1

movie, movie

Column Densities

Beta 10, Beta 1

Made preparations to do research in China

I installed VisIt on my laptop and set it up so that I can use VisIt on alfalfa remotely.

I plan to modify the outflow module. As the radiation force changes, the velocity and density profiles change dramatically within the sonic radius, and the sonic radius itself also moves. So I probably need to solve this equation and plug the resulting velocity and density profiles into the outflow module.

Hypre tolerances

So I ran hypre with varying tolerances for the radiation solve and found that the solution only converged when I set the tolerance to 1e-10.

Movie

So I looked more closely at what the 'tolerance' actually refers to in hypre. The hypre user guide is not much help…

Convergence can be controlled by the number of iterations, as well as various tolerances such as relative residual, preconditioned residual, etc. Like all parameters, reasonable defaults are used. Users are free to change these, though care must be taken. For example, if an iterative method is used as a preconditioner for a Krylov method, a constant number of iterations is usually required.

I'm assuming that hypre is trying to solve Ax=b and instead ends up with an approximate solution x' that satisfies Ax'=b', where r stands for the residual (r = b' - b). I'm not sure what the _C means or what <C*b,b> represents… presumably it is the inner product of some matrix 'C' times b with b.

Iters       ||r||_C     conv.rate  ||r||_C/||b||_C
-----    ------------    ---------  ------------ 
    1    1.667442e-07    265.427358    3.581554e-08
    2    2.421703e-07    1.452347    5.201657e-08
    3    2.037404e-07    0.841310    4.376208e-08
    4    1.176484e-07    0.577442    2.527008e-08
    5    7.646604e-08    0.649954    1.642440e-08
    6    4.446094e-08    0.581447    9.549914e-09
    7    2.173844e-08    0.488933    4.669272e-09
    8    1.033716e-08    0.475525    2.220354e-09
    9    5.190075e-09    0.502079    1.114794e-09
   10    2.514604e-09    0.484502    5.401202e-10
   11    1.291694e-09    0.513677    2.774473e-10
   12    6.662719e-10    0.515812    1.431108e-10
   13    4.688375e-10    0.703673    1.007032e-10
   14    2.451621e-10    0.522915    5.265918e-11
   15    1.654131e-10    0.674709    3.552962e-11
   16    8.812279e-11    0.532744    1.892819e-11
   17    5.478943e-11    0.621740    1.176841e-11
   18    7.612170e-11    1.389350    1.635043e-11
   19    1.416387e-10    1.860687    3.042304e-11
   20    2.346052e-10    1.656364    5.039163e-11
   21    4.389153e-10    1.870868    9.427609e-11
   22    1.170443e-09    2.666672    2.514034e-10
   23    3.069051e-09    2.622127    6.592117e-10
   24    3.268667e-09    1.065042    7.020879e-10
   25    3.349935e-09    1.024863    7.195436e-10
   26    1.136404e-09    0.339232    2.440919e-10
   27    2.175246e-10    0.191415    4.672284e-11
   28    7.067671e-11    0.324914    1.518089e-11
   29    2.139085e-11    0.302658    4.594611e-12
   30    7.493659e-12    0.350321    1.609588e-12
   31    3.051552e-12    0.407218    6.554531e-13


<C*b,b>: 2.167928e+01

So I looked for an explanation of convergence for linear system solves and found this document

The condition number of a matrix is related to how much a relative error in the rhs affects the relative error in the lhs, i.e.

||δx|| / ||x|| <= κ(A) * ||δb|| / ||b||

which would imply that the relative error in the solution is bounded by the condition number times the relative residual,

||x' - x|| / ||x|| <= κ(A) * ||r|| / ||b||

In general we have

κ(A) = ||A|| * ||A^-1||

and κ(A) indicates how close to singular a matrix is (i.e. a singular matrix would have an infinite condition number).

Also, because of machine precision there is an unavoidable error related to the condition number,

||δx|| / ||x|| ~ κ(A) * ε_machine
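As a toy illustration of why this matters (numpy, not hypre; the matrix here is made up), a small relative residual does not guarantee a small relative error in x when the condition number is large:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0 + 1e-8]])      # nearly singular, cond(A) ~ 4e8
    x_true = np.array([1.0, 1.0])
    b = A @ x_true

    x_approx = x_true + np.array([1e3, -1e3])   # huge error along the near-null direction
    r = b - A @ x_approx

    print(np.linalg.cond(A))                                           # ~4e8
    print(np.linalg.norm(r) / np.linalg.norm(b))                       # ~4e-6: tiny relative residual
    print(np.linalg.norm(x_approx - x_true) / np.linalg.norm(x_true))  # ~1e3: huge relative error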

AstroBEAR solvers

Now we have two solvers available in the code… One is GMRES and the other is PCG (preconditioned conjugate gradient). GMRES does not converge unless the tolerance is set to ~1e-3, but then it does nothing. PCG apparently expects symmetric positive definite matrices, but our matrices are certainly not symmetric. Not sure how much that matters… I'm also not sure whether PCG is actually doing any preconditioning automatically; I am not setting the preconditioner, etc…

Also, the actual solution vector 'x' is in terms of the radiation energy, not the radiation temperature. Since E ~ T^4 and there is a 3-order-of-magnitude range in temperature, there will be a 12-order-of-magnitude range in energies, and the relative error in the vector norm will be dominated by the larger values… A better metric would be the maximum local relative error…
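A tiny numpy illustration of that last point (the numbers are made up): with values spanning ~12 orders of magnitude, a global-norm relative error is set almost entirely by the largest entries, while the maximum local relative error can still be order unity:

    import numpy as np

    E_true = np.logspace(0.0, 12.0, 13)   # stand-in radiation energies, E ~ T^4
    E_approx = E_true.copy()
    E_approx[0] *= 2.0                    # 100% error, but only in the smallest zone

    print(np.linalg.norm(E_approx - E_true) / np.linalg.norm(E_true))  # ~1e-12: invisible to a norm tolerance
    print(np.max(np.abs(E_approx - E_true) / E_true))                  # 1.0: smallest zone is off by 2x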

Meeting update

MHD Shear Flows

  • Need to decide on a field orientation for the 'perpendicular' case. The Beta=1 effects for a uniform field in y or z are too strong (see below). Talking with Jonathan about this, it seems a highly unrealistic initial field orientation to begin with — how could you have this large-scale perpendicular field and a tilted interface in the first place? If there were some large-scale perpendicular field, then by the time the flows collided they would have no tilted interface. A more realistic perpendicular case might be a helical field inside of the colliding flows only (no ambient field)? This would simulate a perpendicular field without worrying about the symmetry broken by the tilted interface. For now though, I am running 3D low-res sims of the y and z fields with a weaker field.
  • The data set for these runs, as they are now, is extremely large. I am at ~120 GB now, ¼ of the way through the production Shear15, Beta=1, x-field run. This is for ~1500^3 effective cell resolution (48^3 base grid + 5 levels). I estimate each run will be ~1 TB in size. Where to store all this data?
  • Stuff I still need to finalize for production runs: 1) Potentially increasing the box size to prevent back-flow into the outflow boundary. This will most likely be a problem in the Shear 30 and 60 cases (see my page). I can do this either by adding code to enable a restart with a bigger box once the back-flow hits the wall, or by just running the sim with a longer box in x from the start. 2) Deciding on the parameters for the flow, i.e. the Mach number, ramp-down times, etc., for production.

3D High Res Colliding Flows

I ran a Beta = 1, Shear = 15, field = parallel case on BS with 8000 cores. It got ~¼ of the way through. It ran into some bad CFL numbers (at first finite, then cfl=infinity) and src errors, requested restarts, and then made a few more frames until it died with the error:

"hyperbolic/sweep/stencil_control.cpp.f90, line 353: 1525-108 Error encountered while attempting to allocate a data object. The program will stop."

I'm attaching log files from the machine to this blog post.

The global info allocations are at 300 GB at frame 57. The Info allocations didn't reach that size until frame 160 in the Hydro run on BS.

I'm curious how the src errors are mitigated

I am also curious about this - I saw that on BH (~64 cores), the Info allocations in the standard out were 10x larger than the size of the chombo file. We talked about this before as being related to the chombo not counting ghost zones. When I moved to BS, the difference was a factor of 100. Is this likely ghost zones again, with the larger factor this time being due to the many more, smaller patches distributed over 8,000 cores?
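As a rough illustration of the ghost-zone effect (not AstroBEAR's actual bookkeeping; the patch size and ghost width below are assumptions), the overhead of an n^3 patch with g ghost zones per side grows quickly as patches shrink:

    def ghost_overhead(n, g):
        """Ratio of (interior + ghost) cells to interior cells for an n^3 patch
        with g ghost zones on each side (illustrative numbers only)."""
        return ((n + 2 * g) / n) ** 3

    print(ghost_overhead(32, 4))   # ~2x for largish patches
    print(ghost_overhead(8, 4))    # 8x for small patches, so many small patches
                                   # spread over 8000 cores push the ratio up fast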

Here are density and mesh images over time:

Everything looks good..

3D Low Res Perpendicular Field Cases

I ran both the y-field and z-field cases. This is a uniform field that runs through the ambient object and the CF object, perpendicular to the CFs. Here is a movie of the y-field case. This is density with the B vector field. Note the flow is colliding in x. Beta = 1.

We see that the oncoming flows cause the field to bend strongly, especially near the x faces of the box. The incoming flow is then directed along these field lines, squashing the material into a pancake in the xz plane. The field is pushed together by the flow and piles up strongly in the collision region, where the magnetic pressure becomes so strong that it prevents gas from collecting there. Hence the vacated cavity in the collision region, traced out by a billowing shell of material traveling in y, away from the colliding flows.

The field is so strong in this case that no interesting features of the flow come through. The NTSI, for example, is completely wiped out.

movie

I also made column density maps of this case and the 3D low res z-field case and posted them on my page here.

I am now running these same simulations with Beta = 10 and with extrapolating/outflow-only boundary conditions on all faces of the box, since the problem with the B-fields we saw before only happens when the B-fields are normal to the outflow boundary. The cases above had reflecting, B-normal boundaries where the field was coming from, but in trying to avoid the bounce off of the boundary, I am changing the BCs now too. Directories:

/bamboodata/erica/CollidingFlows/CollidingFlows/MHD_3D/Beta10/Shear15/

They are ~¼ way through on bamboo and grass and take ~ 20 hours to complete.

Science Meeting Update 05/12/14 -- Baowei

  • Ablative RT
    1. 2D with 200 (10%) extended ghost zones: the bubble appears later compared to the run with 5 extended ghost zones.
       (Plots: Rho, T, Vy)
    2. 3D: working on the 3D data from Rui.

new resolution numbers for 2.5D pulsed jet paper

Now that I fixed that typo in the cooling routines, the non-equilibrium cooling will be a bit stronger. This is good news for the 2.5-D sims as long as I am still able to resolve the cooling length, and it will make my resolution more impressive when compared to other papers. I redid the cooling length calculations and updated the following table in my paper:

This will, however, be bad news for the 3-D sims. We've already seen that it is very difficult to get high resolution on these runs. The shorter cooling length means we will be that much less resolved. We may have to reconsider the problem set up, and do something that the code can handle more easily.
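For reference, the kind of estimate behind that table is a post-shock cooling length L_cool ~ v_shock * t_cool. A generic sketch follows (the real numbers come from the non-equilibrium cooling curves; Lambda_T, n, T, and v below are placeholders, not the paper's values):

    k_B = 1.380649e-16   # erg/K

    def cooling_length(n, T, v, Lambda_T):
        """Rough post-shock cooling length: t_cool ~ (3/2) * 2n k_B T / (n^2 Lambda(T)),
        L_cool ~ v * t_cool.  All inputs in cgs; placeholders only."""
        t_cool = 3.0 * k_B * T / (n * Lambda_T)
        return v * t_cool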

1D Radiation sub-cycling

New time stepping that is based on the actual time scale for the energy to change.
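A minimal sketch of that idea (the names and the safety factor are assumptions, not the code's actual implementation): limit the sub-cycle step by the local timescale on which the energy changes, t_E = E / |dE/dt|:

    def subcycle_dt(E, dEdt, safety=0.1, dt_max=None):
        """Sub-cycle time step limited by the energy-change timescale (sketch only)."""
        t_E = abs(E) / max(abs(dEdt), 1e-300)   # avoid divide-by-zero
        dt = safety * t_E
        return min(dt, dt_max) if dt_max is not None else dt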

Movie

science meeting update 05/05/14 -- Baowei

  • Ablative RT
    1. Rui is running 2D RT on LLE machines. He will check the growth rate with his MATLAB program when he gets the data (find the front by checking the slope and subtracting the speed of the whole body). He will send me the 3D results and let me try the 3D.
    2. Still working on the hypre choking issue. http://astrobear.pas.rochester.edu/trac/blog/bliu05012014

Meeting update

"In 3D, the vector potential should be averaged along each edge of the component parallel to that edge. For example Ax should be averaged along each edge that is parallel to the x-axis. The 2nd order errors due to estimating the average value along an edge by the midpoint will not produce divergence in the resulting B-field." For what field geometry can one just set the B fields directly?

Where does maintainauxarrays get set? There is a bit on it in amr/amr_control.cpp.f90:

But it seems one can set this just in the objects.

2D Ablative RT

Maximum subcycling number is reached at around frame 278 (diffcfl=0.04 and Subcycling=100). Here are the plots along Lx/2. Rho and T at the top could be tiny but are all positive. The Vy plot stays stable until around frame 270…

(Plots: Rho, T, Vx, Vy)