Posts for the month of October 2013

Binary simulation

This is a very long-time sim. It was terminated around 20 cu. It revealed a bug in the program: bluehive reported an unexpected termination, which appears to be related to pointers.

It evolved to the picture just before the common envelope stage.

infall.gif

Binary simulation

With Baowei's help, I finally got the Binary module to run. This is a basic simulation: there is no fallback steady disk, only BH accretion. I will adjust the wind velocity and extend the simulation time to see what happens.

Update 10/28

With a lot of Baowei's help, I managed to figure out why the spherical wind module was not working for me.

  • The hydro runs are completed for that and the results are on the PN page.
  • Updated all of the MHD runs with the new color bars/schemes

Meeting Update 10/28/2013 - Eddie

  • Getting a reservation on bluestreak to do the beta = 1 3D pulsed jet.
  • Looked back at the mach stem project I had started last year, and reran an updated version of the code…

movie

This is one stationary clump, and I have reflected the data to make the image look like the physical situation. Now I want to simulate two clumps at the same time and give one of them a velocity.

Meeting update

Registered for Fire Down Below: The Impact of Feedback on Star and Galaxy Formation conference for April - http://www.kitp.ucsb.edu/activities/dbdetails?acro=stars-c14

Still need to apply to present a poster.

Have some extra things to add to the Hypre Appendix - like the stencil, matrix equations, how we incorporate Hypre into AstroBEAR and the types of BC's available with the elliptic solver.

Been reading on outflows and colliding flows, namely Plunkett and Jonathan's papers so far.

Meeting Update 10/28/2013 Zhuo

I have also run the basic disk and binary modules. The binary module was not working, while the disk module ran fine.

I was reading through the objects and modules in AstroBEAR; since there is no module that simulates the fallback disk, I may have to write a new one. The objects and modules are complicated. I think the modules that most resemble the fallback disk are corotating-binary and basic-disk, but neither of them simulates the global picture of a binary star.

Meeting Update 10/28/2013 -- Baowei

  • Tickets
    1. new: #310 (Implement Ionization Table to AstroBEAR)
    2. closed: none
  • Worked on
    1. Conduction Front test: #309
    2. EOS: #310

Half-Plane Simulations

This is the Basic-Disk-Example in 3D, but with 4 refinement levels and with a simulation time t=1.

At first the disk shows strong variations; it may first have to relax from its initial conditions.

The top figure shows the simulation of the full disk; the bottom figure shows the simulation of only the upper half-plane of the disk, saving half of the computational time. Reflecting boundary conditions were applied at the disk plane.

As we can see, both simulations yield the same results.

On my page you can see the corresponding face-on views.


Movie: full simulation


Movie: Half-plane simulation

Simulation of the Circumnuclear Disk

Parameters and scales:

timescale: 1e7 yr

number density scale: 1 ccm

self-gravity and MHD are switched off

central mass: 4e6 M_sun

the center contains an outflow object with radius 0.4 pc, density 1e4 ccm and velocity 10 km/s

The inner rim of the disk first develops a 'wedge'-like shape due to the outflow, then two knobs develop that are carried away by the outflow. Later on, the outflow seems to be deflected by the inner rim.


Movie

Federrath and Krumholz accretion temperature

some new images and bondi radius

http://www.pas.rochester.edu/~shuleli/tsf_feed

The explosion happens at frames 36 ~ 38. If we take an estimate of the temperature surrounding the star around that time (< 200 K) and the stellar mass at the same time (about 0.82 solar masses), we get a Bondi radius of about 868 AU (0.075 on the grid of the plot), which corresponds to 38 grid points.
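For reference, a minimal sketch of how an estimate of this order comes out. The temperature, mean molecular weight, adiabatic index, and the choice of R_B = GM/c_s^2 (without a factor of 2) are assumptions of mine, not values pulled from the run:

```python
# Rough Bondi-radius estimate for the frames discussed above.
# T, mu, gamma, and the definition R_B = G*M/cs^2 are assumed values,
# not numbers taken from the simulation output.
G     = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
k_B   = 1.381e-16     # Boltzmann constant [erg K^-1]
m_H   = 1.673e-24     # hydrogen mass [g]
M_sun = 1.989e33      # solar mass [g]
AU    = 1.496e13      # astronomical unit [cm]

M     = 0.82 * M_sun  # stellar mass near frames 36-38
T     = 150.0         # K, somewhat below the quoted 200 K bound
mu    = 2.3           # mean molecular weight for molecular gas (assumed)
gamma = 5.0 / 3.0     # adiabatic index (assumed)

cs2 = gamma * k_B * T / (mu * m_H)   # sound speed squared
R_B = G * M / cs2                    # Bondi radius

print(R_B / AU)   # ~8e2 AU with these choices; same order as the ~868 AU above
```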

Momentum Conserving Self Gravity

Making the momentum be conserved requires recasting the gravitational force as a total differential

so we need to find such that

Since

we can substitute for and we have

which is equivalent (in 1D) to

where we can identify the equivalent momentum flux tensor as

  • In more than 1D we have

or we can write it as
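For reference, the standard form of this recasting (cf. the Jiang et al. 2013 paper linked at the end of this post) is, in 1D,

\[
\rho\,\partial_x\phi \;=\; \frac{1}{4\pi G}\,(\partial_x\phi)(\partial_x^2\phi)
\;=\; \partial_x\!\left[\frac{(\partial_x\phi)^2}{8\pi G}\right],
\]

using \(\nabla^2\phi = 4\pi G\rho\), and in more than one dimension

\[
\rho\,\partial_i\phi \;=\; \partial_j T_{ij},
\qquad
T_{ij} \;=\; \frac{1}{4\pi G}\left(\partial_i\phi\,\partial_j\phi \;-\; \tfrac{1}{2}\,\delta_{ij}\,|\nabla\phi|^2\right).
\]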

Momentum Conserving Self Gravity Source Terms in AstroBEAR

  • q - initial state
  • qLx - left edge of x - interface in predictor
  • qLy - lower edge of y - interface in predictor
  • qRy - upper edge of y - interface in predictor
  • q2 - intermediate cell centered quantities
  • q2Lx - updated left edge of x - interface used for final riemann solve
  • fx - predictor x flux
  • f2x - final x flux
  • q3 - new values

Normal CTU:

  • qLx = Reconstruct(q), etc… (for all 6 qxx states)
  • fx=Riemann_Solve(qLx,qRx), etc… (for all 3 edges)
  • q2 = q+fx+fy+fz
  • q2Lx=qLx+fy+fz etc… (for all 6 q2xx states)
  • f2x=Riemann_Solve(q2Lx,q2Rx) (for all 3 edges)
  • q3=q+f2x+f2y+f2z

Strang split self-gravity:

  • q = q + sg(q,phi)*hdt
  • qLx = Reconstruct(q), etc… (for all 6 qxx states)
  • fx=Riemann_Solve(qLx,qRx), etc… (for all 3 edges)
  • q2 = q+fx+fy+fz
  • q2Lx=qLx+fy+fz etc… (for all 6 q2xx states)
  • f2x=Riemann_Solve(q2Lx,q2Rx) (for all 3 edges)
  • q3=q+f2x+f2y+f2z
  • q3=q3+sg(q3,phi)*hdt

where

sg=sg_x+sg_y+sg_z and sg_x has two non-zero terms…

Momentum conserving self-gravity

  • qLx = Reconstruct(q), etc… (for all 6 qxx states)
  • qLx(px) =qLx(px)+sg_x(q,phi)*hdt
  • qLy(py) = qLy(py)+sg_y(q,phi)*hdt
  • qLy(pz) = qLy(pz)+sg_z(q,phi)*hdt
  • fx=Riemann_Solve(qLx,qRx), etc… (for all 3 edges)
  • q2 = q+fx+fy+fz + sg(q,phi)
  • q2Lx=qLx+fy+fz etc… (for all 6 q2xx states)
  • q2Lx=q2Lx+sg_y(q,phi)+sg_z(q,phi) (for all 6 q2xx states)
  • f2x=Riemann_Solve(q2Lx,q2Rx) (for all 3 edges)
  • f2x(p)=f2x(p)+SG_PFlux_x(phi) (same for y and z)
  • q3=q+f2x+f2y+f2z+SG_E(f2x(rho),f2y(rho), f2z(rho), phi)

The final updates for momentum include true 'fluxes'

SG_PFlux_x(phi) has only 1 non-zero term

and an energy update term that is derived from mass fluxes.

SG_E(f2x, f2y, f2z, phi)
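My reading of these two terms, reconstructed from the stress tensor above rather than copied from the code, is

\[
\mathrm{SG\_PFlux}_x(\phi) \;\simeq\; T_{xx} \;=\; \frac{1}{8\pi G}\left[(\partial_x\phi)^2 - (\partial_y\phi)^2 - (\partial_z\phi)^2\right],
\]

with the analogous components for the y and z fluxes, and

\[
\mathrm{SG\_E}(f_x,f_y,f_z,\phi) \;\simeq\; -\,\Delta t \sum_{d=x,y,z} f_d(\rho)\,\partial_d\phi,
\]

i.e. the work term \(-\rho\mathbf{v}\cdot\nabla\phi\) with \(\rho\mathbf{v}\) replaced by the time-averaged mass fluxes.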

For a description of an algorithm that also conserves energy see http://adsabs.harvard.edu/abs/2013NewA...19...48J

Journal Club 1022 Agenda

Meeting Update 10/21/2013 Zhuo

Basically reading book - the Riemann solver book and astrophysical fluid dynamics book. The 2D programming has some problems.

I thought about the problem of collapse of molecular cloud and hope that Virial theorem may introduce something new by replacing the energy equation in Euler equation. But found nothing significance so far.

Meeting Update 1021

Federrath Accretion with rotation, full movie:

http://www.pas.rochester.edu/~shuleli/tsf_paper/fedf.gif

Krumholz Accretion restart:

http://www.pas.rochester.edu/~shuleli/tsf_paper/krumf.gif

The snowplow phase from a supernova expansion typically lasts 2 million yrs. If we use the shell speed u_sh ~ t^(-4/3) as the characteristic scaling, we can estimate the time needed for the shell to slow down from Mach = 1.5 to an acoustic wave (Mach = 1). This time period is about 0.8 million yrs. Therefore one estimate of the duration of the wind in our simulation could just be 0.8 million yrs, at the end of which both accretion routines have a disk remaining.
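As a sanity check on that number, taking the roughly 2 Myr snowplow duration as the reference time t_1 (an assumption on my part):

\[
u_{sh} \propto t^{-4/3} \;\Rightarrow\; t \propto u_{sh}^{-3/4}
\;\Rightarrow\; \Delta t = t_1\!\left[\left(\frac{\mathcal{M}_1}{\mathcal{M}_2}\right)^{3/4} - 1\right]
\approx 2\,\mathrm{Myr}\times\left(1.5^{3/4} - 1\right) \approx 0.7\,\mathrm{Myr},
\]

which is the same order as the 0.8 Myr quoted above.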

Meeting Update 10/21/2013 - Eddie

Pulsed Jets

Beginning of last week, I made a correction to the cooling length in my pulsed jet simulations…ehansen10162013a

I also made a 3-slice movie of density for the partial run. This is 12 frames (out of 100) of the case.

movie

I am not sure why VisIt made the images like this. I know it is very distracting, so I will try to fix this.


Radiative Shocks / Cooling

We still do not fully understand the discrepancy between astrobear and the Raymond shock code. It is clear that cooling is implemented differently.

After talking with Pat about cooling in astrobear vs. cooling in the Raymond code, I tried figuring out the differences analytically…ehansen10172013

Pat did not comment on my calculations, but I got the impression that this was not the correct way to compare the two different methods?


Mach Stems

Looked back at what I had started last year and some papers to refresh my memory. Will start doing some runs this week.

Meeting Update10/21/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: none
  • Resources
    1. Submitted Teragrid renewal proposal last Tuesday
  • Worked on
    1. ticket #309: tried to figure out the bug causing the different hypre matrix with the same dt_diff for subcycle 1 and 2. The bug was found and fixed in subroutine DiffusionSetBoxValues when setting BCs with ghost zones.
    2. Teragrid proposal

Meeting update

Here is the working paper figure for the pressure approximation,

As you can see, 1/10 case (light blue line - accidentally not on legend) is no longer a valid approximation with this method. This makes sense when we consider the Jeans length of the ambient — the ambient is Jeans unstable, and so the ambient in this case collapses more like a uniform sphere. When I do a similar approximation for the pressure, only using the solutions given by uniform collapse, I get this,

From this plot, we see the same value of the 1/10 case as in the previous approximation, which seems to indicate this is some boundary case between the dense and light approximations — ambients equal and denser than 1/10 are better approximated by the uniform collapse. We also see the same grouping up of the lighter cases that we do in the sparse approximation, but they are grouped up around a different value. This indicates that the trend is robust — settling of the atmosphere leads to some external pressure threshold needed to induce collapse.

Here is a wiki page on the 2 different approximation methods I used to make these plots -

http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/SurfacePressureApprox

Running the 1/100 case out longer, I did not get collapse. This may have to do with the scale height for this case extending beyond the box?

I added the sparse approximation description to the paper, and worked on finishing up the appendices on self-gravity. Here is a completed paper draft,

http://www.pas.rochester.edu/~erica/BEsphere_EK_09.20.13.pdf

Cooling: internal energy vs. enthalpy

Cooling in AstroBEAR is currently handled by removing energy from the internal energy. In the Raymond code, cooling is different, but it appears to be equivalent to removing energy from the enthalpy. I have derived the resulting pressure for both cases below.


Internal Energy Formulation (current implementation)

The internal energy is defined as

.

Let be the energy loss due to cooling. The new internal energy is

,

.

Therefore, the new pressure is

.

For an ideal gas with

.


Enthalpy Formulation (proposed change)

The enthalpy is defined as

.

After cooling, we have

,

,

.

Therefore, the new pressure is

.

For an ideal gas with

.


It appears that the enthalpy formulation would result in removing less thermal pressure, and it would therefore have a higher temperature in comparison to the internal energy formulation.
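To make that comparison concrete, here is a minimal reconstruction of the two results, writing both energies per unit volume and calling \(\Delta E\) the energy removed by cooling in a step (the notation here is mine):

\[
e = \frac{P}{\gamma-1}, \qquad e' = e - \Delta E \;\Rightarrow\; P' = P - (\gamma-1)\,\Delta E,
\]
\[
h = e + P = \frac{\gamma P}{\gamma-1}, \qquad h' = h - \Delta E \;\Rightarrow\; P' = P - \frac{\gamma-1}{\gamma}\,\Delta E.
\]

Since \((\gamma-1)/\gamma < \gamma-1\) for \(\gamma > 1\), the enthalpy formulation removes less pressure for the same \(\Delta E\).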

correction on cooling length resolution in pulsed jet sims

So I actually made a mistake in my cooling length calculations… I was modeling a 100 km/s shock in my 1D code to get a feel for what the cooling length in my pulsed jet sims should be. I thought this was correct since the injection velocity varies from 150 - 250 km/s. However, the maximum shock velocity is actually half of that 100 km/s velocity difference, so 50 km/s.

By comparison, a 50 km/s shock will have a smaller cooling length, so this is actually good news. It means that my cooling length resolution is better than I thought.
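One way to see the factor of two (this is my reading of the geometry): an internal working surface moves at roughly the mean of the fast and slow material, so each stream runs into it at half the velocity difference,

\[
v_{shock} \;\approx\; \frac{v_{max}-v_{min}}{2} \;=\; \frac{250-150}{2}\ \mathrm{km\,s^{-1}} \;=\; 50\ \mathrm{km\,s^{-1}}.
\]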

Below is an updated table of the resolutions used from other papers.

| Paper | Base Grid | AMR | Physical Size [AU] | Jet Radius [AU] | Lcool [AU] | Cells/AU | Cells/Rj | Cells/Lcool |
| Mine (2013) | 42 x 420 | 7 | 2005 x 20054 | 334 | 7.29 | 2.68 | 896 | 19.53 |
| de Colle (2006) | 180 x 1800 | 0 | 2005 x 20054 | 334 | 7.29 | 0.09 | 30 | 0.66 |
| Raga (2007) | 16 x 64 | 10 | 1671 x 6685 | 134 | 10 | 9.80 | 1314 | 98.05 |
| Tesileanu (2012) | 128 x 384 | 6 | 400 x 1200 | 20 | 0.46 | 40.96 | 819.2 | 18.81 |

This also has implications for the 3D runs that we want to do. The extremely high cpu hour estimates are for higher cooling length resolutions which we may not need. Below is an updated table of the final results I had previously posted.

| Base Grid | Levels | Cells/Lcool | SUs (millions) |
| 84 x 420 x 84 | 7 | 19.53 | 645.0 |
| 84 x 420 x 84 | 6 | 9.77 | 116.6 |
| 84 x 420 x 84 | 5 | 4.88 | 19.79 |
| 21 x 105 x 21 | 7 | 4.88 | 19.79 |
| 21 x 105 x 21 | 6 | 2.44 | 3.028 |
| 21 x 105 x 21 | 5 | 1.22 | 0.424 |

I tried a run on bluestreak with the parameters of the lowest resolution listed in the table above. I ran the model and got 12 frames in about 18.55 hours on 4096 cores. That is approximately 76,000 SUs.
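For the record, that SU figure is just the wall-clock time multiplied by the core count:

\[
18.55\ \mathrm{hr}\times 4096\ \mathrm{cores}\;\approx\;7.6\times10^{4}\ \mathrm{SUs}.
\]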

1D shock comparisons

update

For a planar HSE atmosphere, with differential equation,

,

the solution is an exponential with given scale height,

where

Assuming the force is given by,

and Matm is the mass of the accumulated atmosphere, gravitationally attracted to the BE sphere through freefall,

(where r_ff is found by inverting t_ff for r for 2 point masses) we get the expression for P = F/A at the surface (after plugging in the variables),
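Spelling out the pieces referred to above (this is my assembly of the expressions, and the symbols are my own notation):

\[
\frac{dP}{dz} = -\rho g, \qquad \rho = \frac{\mu m_H P}{k_B T}
\;\Rightarrow\; P(z) = P_0\, e^{-z/H}, \qquad H = \frac{k_B T}{\mu m_H g} = \frac{c_s^2}{g},
\]

and for the freefall part, with \(M_{atm}(t)\) the ambient mass inside the freefall radius \(r_{ff}(t)\),

\[
F \simeq \frac{G\,M_{BE}\,M_{atm}(t)}{R_{BE}^2},
\qquad
M_{atm}(t) \simeq \frac{4\pi}{3}\,\rho_{amb}\left[r_{ff}(t)^3 - R_{BE}^3\right],
\qquad
P(R_{BE},t) = \frac{F}{4\pi R_{BE}^2}.
\]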

Using this expression gives the following results,

Vertical lines indicate when a sink formed for the corresponding run. The horizontal line is the initial external pressure on the BE sphere, which since the sphere is a critical BE sphere, equals the critical pressure.

We see that as eta = rho(Rbe)/rho(amb) increases (i.e. the ambient gets sparser), it takes longer for the pressure to grow at the surface of the sphere, inducing collapse.

Plotting denser ambient mediums skews the graph, as they increase much more rapidly on the y-axis.

Here is a summary table of the runs,

Things for me to consider:

Is this an adequate description of the problem, does it give good results?

How does box size come into play?

Should we do a different approximation for the dense cases?

Does this explain the qualitative results in the table?

Journal Club 10/15 Agenda

  • Discuss the conduction front test: ticket #309
  • Erica's BE sphere model

Meeting Update 1014

Meeting Update 10/14/2013 Zhuo

I took a rest last week because I was not in good condition. I was still thinking about the 2D problem. I will finish the programming this week.

Meeting Update 10/14/2013 -- Baowei

  • Tickets
    1. new: none
    2. closed: #304 (Problem keeping hydrostatic equilibrium with sink particle), #307 (BE module bug)
  • Resources
    1. Teragrid renewal proposal due tomorrow
  • Worked on
    1. ticket #309: fixed a bug in the subcycling boundary conditions; the Hypre choking issue is solved and the results compare better with the analytic solution. May still have problems with the boundary conditions
    2. proposal and progress report

Journal Club 1008 Agenda

Topics this week:

  • More discussion on the physical conduction front test.

http://astrobear.pas.rochester.edu/trac/astrobear/ticket/309

  • Erica's BE sphere model
  • Adam's paper review

Meeting Update

  • Worked on scaling implementation
  • Still testing merged version of code
  • Working on a hydrostatic Jupiter… Getting funny behavior in ambient - but density contrast is extremely high…

Setting Scales in AstroBEAR

I added the ability to set the TimeScale in physics.data; however, this requires more careful testing of which combinations of parameters can be set…

There are some constraints between scales… i.e.

pScale=nScale*TempScale*Boltzmann

rScale=nScale*Xmu*amu

lScale/TimeScale=sqrt(pScale/rScale)

and so on…

There are currently 7 scales that can be set - and three equations that constrain them.

Xmu, rScale, nScale, TempScale, TimeScale, pScale, and lScale

Therefore there can be 4 parameters set in physics.data - though not certain combinations. To test this we can construct a linear system in the log of each parameter…

log(pScale) - log(nScale) - log(TempScale) = log(Boltzmann)

log(TimeScale) = log(1d10)…

Which would give a 7x7 linear system that can potentially be solved. If the determinant is zero, then the 4 parameters that were set by the user are not independent. If they are independent, then the matrix can be inverted and solved for the remaining parameters.
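Here is a small sketch of that check (hypothetical code, not the actual AstroBEAR implementation), treating the logs of the seven scales as the unknowns:

```python
# Solvability check for user-specified scales: build the 7x7 system in the logs
# of the scale parameters and test whether its determinant is non-zero.
import numpy as np

# column order of the unknowns (logs of each scale)
names = ["Xmu", "rScale", "nScale", "TempScale", "TimeScale", "pScale", "lScale"]

# The three physical constraints in log form:
#   log(pScale) - log(nScale) - log(TempScale) = log(Boltzmann)
#   log(rScale) - log(nScale) - log(Xmu)       = log(amu)
#   log(lScale) - log(TimeScale) - 0.5*log(pScale) + 0.5*log(rScale) = 0
constraints = np.array([
    [ 0.0,  0.0, -1.0, -1.0,  0.0,  1.0,  0.0],
    [-1.0,  1.0, -1.0,  0.0,  0.0,  0.0,  0.0],
    [ 0.0,  0.5,  0.0,  0.0, -1.0, -0.5,  1.0],
])

def solvable(user_set):
    """user_set: the 4 parameter names fixed in physics.data.
    Each fixed parameter adds a row like log(TimeScale) = log(value)."""
    rows = [[1.0 if n == s else 0.0 for n in names] for s in user_set]
    A = np.vstack([constraints, np.array(rows)])
    return A.shape[0] == 7 and abs(np.linalg.det(A)) > 1e-12

# Setting nScale, TempScale, TimeScale, and Xmu determines everything else...
print(solvable(["nScale", "TempScale", "TimeScale", "Xmu"]))   # True
# ...but pScale, nScale, TempScale, Xmu over-constrains the first equation.
print(solvable(["pScale", "nScale", "TempScale", "Xmu"]))      # False
```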

Meeting Update 10/07/2013 Zhuo

Read chapter 16 of Toro

Meeting Update 10/07/2013 - Eddie

Not much to report this week. Just working on the paper: added a lot of text and references, and making figures (ehansen09292013)

Meeting update

I have been working through details of the derivation we discussed the other day. Please see my previous post on the pressure expression and my concern there.

In the dense ambient case limit, we talked about using a spherical wave that has had time enough to fall onto the BE sphere via freefall. For this, we considered Vff=Sqrt(GM/r) (which may be off by a factor of root 2), and said v=r/t, r=vt.

I think this may be better given by the following.

Imagine the following situation: a mass m falling toward a mass Mr from an initial velocity of 0. For the case of spherical collapse, m is the mass of a thin shell, attracted to all the mass Mr within the shell by Gauss's law. None of the shells cross, so Mr is constant over the collapse.

By energy conservation, Ei = Ef, or

where r0 is the initial radius material is falling from, rf is the final radius (for our case, rf=Rbe). Solving for v, we have,

For the case rf << r0, the 1/rf term dominates, and this gives the expression,

The flow near the core, at late times of spherical collapse approaches this value (in both the LP solutions and the BE collapse solns).
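Writing that energy argument out explicitly (the symbols follow the text above):

\[
-\frac{G M_r m}{r_0} \;=\; \frac{1}{2} m v^2 - \frac{G M_r m}{r_f}
\;\Rightarrow\;
v = \sqrt{2 G M_r \left(\frac{1}{r_f} - \frac{1}{r_0}\right)}
\;\longrightarrow\;
\sqrt{\frac{2 G M_r}{r_f}} \quad (r_f \ll r_0).
\]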

In looking for R(t), maybe we should write it this way?,

i.e. the velocity of the material at a distance r0 above Rbe is the freefall velocity. Rearranging and solving for r0(t) gives,

which is to say, everything within this radius has had time enough to fall onto the BE sphere.

SESAME Table subroutines for AstroBEAR

S2GETI, S2EOSI - These routines are to be incorporated into a hydro code which uses the inverted form of an equation of state. Density and internal energy are the independent variables, and pressure and temperature are the dependent variables.

  1. S2GETI is used to get data from the library
CALL S2GETI (IR, IDS2, TBLS, LCNT, LU, IFL)

IR material region number

IDS2 Sesame material number

TBLS name of array designated for storage of tables

LCNT current word in array TBLS

LU unit number for library

IFL error flag

  2. Subroutine S2EOSI is used to compute an EOS point. That is, it computes the pressure, temperature, and their derivatives for a given density and internal energy.
CALL S2EOSI (IR, TBLS, R, E, P, T)

IR material region number

TBLS name of array which contains the EOS tables

R density in Mg/m3

E internal energy in MJ/kg

P, T pressure, temperature vectors

P(1), T(1) pressure in GPa, temperature in Kelvins,

P(2), T(2) density derivatives, (∂P/∂ρ)_E, (∂T/∂ρ)_E

P(3), T(3) energy derivatives, (∂P/∂E)_ρ, (∂T/∂E)_ρ.

For certain materials, the library also has tables of the pressure, temperature, density, and internal energy along the vapor-liquid coexistence curve. This information is needed in reactor safety problems. Routines S2GET and S2GETI can be modified to access the coexistence data, and routine LA401A can be used to compute the thermodynamic quantities.

Meeting Update 1007

TSF project:

Mach = 3 Movie: http://www.pas.rochester.edu/~shuleli/tsf_paper/mach3final.gif

Zoom in:
http://www.pas.rochester.edu/~shuleli/tsf_paper/mach3finalzoom.png

Federrath accretion, Mach = 1.5, Parallel rotation:

http://www.pas.rochester.edu/~shuleli/tsf_paper/fedrot.gif

We should be able to run the Federrath sim to the end this week.

Also started MHD sims on bluestreak. The first run is Krumholz Mach = 1.5, no rotation, beta = 4.

Meeting Update 10/07/2013 -- Baowei

  • Tickets
    1. New: #309 (Test Ablative RT module)
    2. Closed: none
  • Users
    1. new one from the University of Kiel asking for the code: "I want to simulate the circumnuclear disk around SgrA*"
  • Resources
    1. working with Dave backing up the wiki from Botwin to Clover
    2. Teragrid progress report: due next Tuesday

Analytical expressions for BE paper

Been thinking about the equations we were deriving the other day, and their origin.

Starting from the Navier Stokes equation,

We can solve for P in hydrostatic equilibrium (dv/dt=0), assuming P=P(z). A volume integral of the above equation then gives,

or

This says the Pressure at some depth is a function of the instantaneous weight of all the material on top of it.

From here, it seems we were thinking of extrapolating this equation now for the dynamic case,

where the bounds went from Rbe (the radius of the BE sphere) to Rbe + R(t), and R(t) depends on whether we are in the subsonic or the supersonic regime. (This integral would actually be in spherical coordinates; it is left in Cartesian here for simplicity.)
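For concreteness, the static result and the proposed dynamic extrapolation would look something like this (my notation, written in planar form as above):

\[
\frac{dP}{dz} = -\rho g
\;\Rightarrow\;
P(z) = \int_z^{\infty} \rho\, g \, dz',
\qquad
P(R_{BE},t) \;\approx\; \int_{R_{BE}}^{R_{BE}+R(t)} \rho(r,t)\, g(r)\, dr.
\]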

Does this seem okay to people still?? I am not entirely convinced, and am having a hard time finding any discussion of the physics here. If it is okay, I can try to crank out some numbers for the expected P(Rbe,t), although I am not sure what function I would use for rho(r,t). Thanks

Update

Hypotheses,

  1. As the box becomes more and more Jeans stable (i.e., the density is low enough that the main force acting on the particles in the ambient is the gravitational acceleration from the sphere, i.e. Bondi flow), then for a fixed volume of the ambient the sphere will go unstable on a timescale given by the Bondi flow (such that an appreciable amount of matter has been accreted by the sphere); see the estimate sketched below. In order to test this, we need a time scale on which the sphere goes unstable (how to define this?). Then, compare to simulations. In this limit, the amount of mass in the box is also a strong factor in the stability of the sphere, so varying the box volume changes the stability of the sphere.
  2. As the box becomes more Jeans unstable (i.e., the density is high enough that the box is well approximated by a uniform sphere and collapses as such), the amount of mass in the box, over some length scale, becomes less important since the ambient collapses on its own. This case leads to strong compression waves on the sphere.
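A possible way to quantify the timescale in hypothesis 1 (just a sketch; \(\lambda\) is an order-unity constant and \(\Delta M_{crit}\) would be the added mass needed to push the sphere past critical):

\[
\dot{M}_B \;\sim\; 4\pi \lambda\, \rho_{amb}\, \frac{(G M_{BE})^2}{c_s^3},
\qquad
t_{unstable} \;\sim\; \frac{\Delta M_{crit}}{\dot{M}_B}.
\]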

Questions to address

What is the appropriate 'collapse time' of a BE sphere? Possible answers: 1) the time it takes for the BP case to collapse to a sink, 2) the average free-fall time, 3) the dynamical time.

Is the collapse time a constant? That is, once the BE sphere goes unstable, does it always collapse in the same amount of time?

Movies of pressures at BE boundary for different limiting cases

Matched

1/10

1/10, bigger (looks the same)

Stability, bigger

1/50

1/50, longer

1/50, bigger

Also attached is a PDF spreadsheet of the runs, whether there was collapse, and the collapse times.

Resolutions Used in Various Jet Papers

| Paper | Base Grid | AMR | Physical Size [AU] | Jet Radius [AU] | Lcool [AU] | Cells/AU | Cells/Rj | Cells/Lcool |
| Mine (2013) | 42 x 420 | 7 | 2005 x 20054 | 334 | 3 | 2.68 | 896 | 8.04 |
| De Colle (2006) | 180 x 1800 | 0 | 2005 x 20054 | 334 | 3 | 0.09 | 30 | 0.27 |
| Raga (2007) | 16 x 64 | 10 | 1671 x 6685 | 134 | 10 | 9.80 | 1314 | 98.05 |
| Tesileanu (2012) | 128 x 384 | 6 | 400 x 1200 | 20 | 0.2 | 40.96 | 819.2 | 8.19 |

For Raga (2007), the numbers in the table make sense for a 10-level run. These values are the same as what is actually stated in the paper. However, they said that they used 11 levels. So either their simulation was actually 10 levels and not 11, or someone did the resolution calculations wrong, or there is some detail about their refinement that I am missing.
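For reference, the check I am doing here (assuming each AMR level doubles the resolution) is

\[
\mathrm{cells/AU} = \frac{16 \times 2^{10}}{1671\ \mathrm{AU}} \approx 9.8,
\qquad
\mathrm{cells}/R_j \approx 9.8 \times 134 \approx 1314,
\]

which matches the table; with 11 levels both numbers would double (about 19.6 cells/AU).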

For Tesileanu (2012), I did not see what their time-dependent velocity was, so I am not sure how strong the internal shocks are. This would determine the cooling length. I will try looking at this more closely to see if I can figure it out from their data.


UPDATE

I changed the above table a bit. Before I was using an Lcool = 4 AU for my calculations. That is still a pretty good benchmark for most of the internal shocks. However, the strongest shock would be at 100 km/s. That is the peak jet velocity of 250 km/s running into the minimum velocity material which is at 150 km/s. I ran these parameters through my 1D code and got Lcool = 3 AU. This is the minimum cooling length that I would expect to see for the internal shocks.

This calculation was done for the hydro case. However, this should be a minimum for all cases (ie MHD with different betas), because magnetic fields would only increase Lcool.

I did the same calculation for the parameters given in Tesileanu (2012), and got Lcool = 0.2 AU. This is mainly due to the increased density. This is for their models which have n_jet = 10^4. They also have models at n_jet = 5 x 10^4, so these would have an even smaller minimum cooling length.

I also did the calculation using the Raga (2007) parameters. I got Lcool = 1.9 AU. This is because they also use a higher density than in my sims. However, the cooling length they reported and used was 10 AU. They did the calculation for a 40 km/s shock, while I did it for an 80 km/s shock…For a 40 km/s shock, astrobear gets 8.30 AU which is pretty close to their 10 AU.

So my question is this: When you have the sinusoidally dependent injection velocity, is it more appropriate to consider your strongest shocks to be equal to the amplitude of your sinusoidal function, or 2x the amplitude?

Journal Club 1001

Agenda:

  • Testing the implicit diffusion.

  • Erica's research topics.

(I'll be leaving after 4:30pm so let's discuss the diffusion stuff first).

  1. Conduction front test with c.u. with uniform rho = 1 (found the simulation is slower than the analytic solution).
  2. Conduction front with different densities.
  3. Conduction front test with real units (in the paper already).
  4. Conduction front test with hydro turned on.
  5. RT nonablative test with c.u.

Characterizing the Forces on the Clumps in Pulsed Jets II - Isothermal Approx.

As discussed in the meeting today, it would be more accurate to do my calculations in the isothermal limit. So I set gamma = 1.0 and got some interesting results.

Turns out that in this limit, the force is always negative (compression regime). As a reminder, the forces are:

The magnetic field and pressure profiles are as follows:

Where is the beta that I define in my initial conditions. is derived from and . is some intermediate radius, set to 0.6 rjet in my set up. depends on and . These are the profiles for . Outside of the profiles are such that the forces go to zero.

Solving for the net force as is results in zero, but if I apply shock jump conditions first, then I can write

where I have defined as the pressure jump and as the density (or magnetic field) jump. It is important to remember that the jump conditions are radially dependent due to the fact that the pressure is radially dependent. In the adiabatic case, this gets pretty complicated. If I enforce the conditions for an isothermal shock, then it simplifies a bit. To maintain the same temperature across the shock, the pressure jump has to match the density jump, so .
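In other words (using \(\chi\) for the jumps, which is my own notation): for an isothermal shock \(P \propto \rho\) on both sides, so

\[
\chi_P \equiv \frac{P_2}{P_1} \;=\; \frac{\rho_2}{\rho_1} \equiv \chi_\rho,
\]

and for a purely hydrodynamic isothermal shock this common jump equals the square of the isothermal Mach number.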

So making the problem isothermal guarantees that the force is always negative; the clump is always in the compression regime. This is consistent with what De Colle and Raga found in their 2008 paper: the non-radiative cases were dominated by thermal pressure forces, and the radiative cases were dominated by magnetic forces.

Even though I made the problem simpler, there are still some interesting things to learn from the plots…

It is easy to make sense of these plots. The force is greater for greater velocities and mach numbers because the magnetic field gets compressed more. More magnetic field means stronger pinching. The other cases of beta follow the same trend, so I will not post them.

If you only looked at the mach number plot, you would naively think that compression is stronger at larger radii. However, the plots follow this trend because mach number increases with radius. At any given moment, the velocity is constant across the shock, so you might think that stronger compression occurs at smaller radii. You would again be wrong. I made a different plot to explore this point.

This plot shows that for a given beta, you will get stronger compression with smaller radii but only to a certain point. Then, the curve turns over and the force starts decreasing. I also noticed in this plot that the different cases for beta were not in order. Strange right? So I made yet another type of plot to check this out.

Indeed, you do not keep getting stronger compression with stronger fields. Could it be that at some point you are "saturated" with magnetic field and the force really stops increasing, or is this a flaw of the isothermal approximation? In reality, the shock always increases the temperature. I played around with gamma in my spreadsheet, and I found that as I increased it to say 1.10, the above plot began to turn the other way.

So yes, for increasing gamma you get weaker compression, but you get compression that always increases with decreasing beta.