Posts for the month of October 2018

Buying storage for archiving/saving data

I. Multiple 8TB external hard drives + USB 3.0 hub + (RAID)

pros: 1. relatively cheap and flexible

cons: 1. performance could be poor 2. not very reliable

  1. 56TB
7 x Seagate Expansion 8TB Desktop External Hard Drive https://www.amazon.com/Seagate-Expansion-Desktop-External-STEB8000100/dp/B01HAPGEIE/ref=sr_1_2?s=electronics&rps=1&ie=UTF8&qid=1540229650&sr=1-2&keywords=16tb+external+hard+drive&refinements=p_n_feature_two_browse-bin%3A5446816011%2Cp_85%3A2470955011&dpID=41mDnJ8-plL&preST=_SY300_QL70_&dpSrc=srch 7 x $150 = $1050
1 x Sabrent 60W 7-Port USB 3.0 Hub https://www.amazon.com/Sabrent-Charging-Individual-Switches-HB-B7C3/dp/B0797NWDCB/ref=sr_1_8?rps=1&ie=UTF8&qid=1540316819&sr=8-8&keywords=7+port+hub+usb3&refinements=p_85%3A2470955011 $40
Total: 56TB (or 48TB with RAID redundancy) $1090

II. Network-attached storage (NAS) - QNAP

pros: high performance and stability

cons: cost

  1. 40TB
QNAP TS-431P2-1G-US Diskless System Network Storage https://www.newegg.com/Product/Product.aspx?Item=N82E16822107986&ignorebbr=1 $330
4 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 4 x $320 = $1280
Total 40TB $1610
  1. 60TB
QNAP TS-669L-US Diskless System High-performance 6-bay NAS Server for SMBs https://www.newegg.com/Product/Product.aspx?Item=9SIA0AJ2U04041 $1000
6 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 6 x $320 = $1920
Total 60TB $2920
  1. 100TB
QNAP REXP-1000-PRO SAS/SATA/SSD RAID Expansion Enclosure for Turbo NAS https://www.newegg.com/Product/Product.aspx?Item=9SIA0ZX7MN0982 $1250
10 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 10 x $320 = $3200
Total 100TB $4450
  1. 120TB
QNAP High Performance 12 bay (8+4) NAS/iSCSI IP-SAN. Intel Skylake Core i3-6100 3.7 GHz Dual core, 8GB RAM, 10G-ready https://www.newegg.com/Product/Product.aspx?Item=9SIA25V4S75250 $2000
12 x 10TB Seagate SkyHawk Surveillance Hard Drive https://www.newegg.com/Product/Product.aspx?Item=N82E16822178999 12 x $320 = $3840
Total 120TB $5840

III. Cloud space: Amazon Glacier

pros: 1. pay-as-you-go (billed monthly for what you store)

cons: 1. hard to predict total fees

$48 per TB per year + other fees (retrieval, requests, data transfer, etc.)
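
Rough cost per TB from the list prices above (ignoring drive failures, power, and Glacier's per-request fees): option I comes to about $19/TB one-time ($1090/56TB), or ~$23/TB with RAID redundancy; the option II NAS builds run about $40-49/TB one-time; and Glacier's $48/TB/yr matches a NAS build's hardware cost after roughly a year of storage, passing option I after about five months.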

Updates 10/22

PNe

Temperature

Initial wind

Recombination

Luke helped me check the equation of state; the gamma factor is correct. We also draw BCs from Luke's CE simulation as the initial state for the recombination equation solver. Still working on it. This is the old note: https://www.overleaf.com/read/xsrfvtjfcyzs

COMMON ENVELOPE SIMULATIONS

Summary of last two weeks' work

  1. Finished XSEDE proposal with Baowei & Eric
  2. Continued to write/edit energy paper
  3. Continued running simulations
  4. Preliminary analysis of simulations

Energy paper

  • Will be on Overleaf before Oct 31
  • Meeting on Nov 6?
  • en.pdf.

Roche lobe overflow in high resolution

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_160_161_162.png

Face-on density (zoomed in) (Run 160, a = 109 Rsun, equal to the theoretical Roche limit separation for this system)
Face-on density (zoomed in) (Run 161, a = 73.5 Rsun)
Face-on density (zoomed in) (Run 162, a = 109 Rsun, with ambient density 10^-10 g/cm^3)

The sound-crossing time for the RG is about 8 days or ~35 frames. This is the approximate time the star would take to deform to fill the Roche lobe.
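
(For scale: taking t_sc ~ 2R/c_s with the RGB radius of 48 Rsun quoted later, 8 days corresponds to a mean interior sound speed of roughly 100 km/s; this is just a back-of-envelope consistency check.)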

While this deformation is happening, the secondary accretes from the ambient medium.

Runs like fiducial run 143 (Model A of Paper I) but with different secondary mass

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_149_151.png

Face-on density (zoomed in) (Run 149, m2=0.5 Msun)
Face-on density (zoomed in) (Run 151, m2=0.25 Msun)

Convergence study for softening length and resolution

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_152_153_154.png

Convergence study for size of maxlevel refinement region

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_163.png http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_163_zoom.png

Medium resolution AGB run (one less level of AMR than fiducial RGB run)

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/p_mult_143_164.png

http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/Run164/rho2D_faceon_frame100_164.png http://www.pas.rochester.edu/~lchamandy/Graphics/RGB/Post-sink_particle/Post-modified_Lane_Emden/Run164/rho2D_faceon_mesh_frame100_164.png
Face-on density (zoomed in) (Run 164, m2=1 Msun)

Update 10/22

Parameter Space Paper

Just a few comments left. Copied from my email:

Places/comments that could still use some attention:

  • General qualitativeness and informality (does anything in particular stand out?)
  • Where to discuss the appropriateness of the case B recombination assumption - in methods where we state the assumption, or later in the results/conclusions somewhere?
  • I've added a few citations to John's paper. Further suggestions for places to add a reference, or more discussion (particularly in section 4.2), are welcome.

Response-only:

  • How to respond to single-frequency suggestion (Just say we've considered it?)
  • Square features in streak images, arising from the lack of velocity to convolve with noise

In addition to reviewer comments, I've updated the figures with new colorbar ranges that should highlight some of the features more clearly.

HD209458b synthetic observations

Still processing. Should have results for low and medium radiation pressure soon; the high radiation pressure run is complete. Average observations over burp and non-burp times? I'll look to see what John did as well.

HD209458b mass loss rate

High ionizing flux crashed with an MPI_WAITANY error at level 4 after a few time steps. This is the first time this has happened (the code is unchanged), so I've requeued it, but I'll also go see if I can find the cause. Unfortunately, the error message doesn't give the location of the call. Fortunately, we only have a few calls to MPI_WAITANY.
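
One way to localize it, sketched below (not code from our tree; checked_waitany and the site labels are my own invention): the default MPI error handler aborts without a traceback, so switch MPI_COMM_WORLD to MPI_ERRORS_RETURN once after mpi_init via mpi_comm_set_errhandler, then wrap the few MPI_WAITANY call sites with a tagged checker that prints MPI_Error_string before aborting.

  ! Hypothetical debugging wrapper, not in the codebase. Requires
  !   call mpi_comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN, iErr)
  ! once after mpi_init, so errors are returned instead of aborting silently.
  subroutine checked_waitany(count, requests, indx, site)
    use mpi
    implicit none
    integer, intent(in) :: count
    integer, intent(inout) :: requests(count)
    integer, intent(out) :: indx
    character(len=*), intent(in) :: site   ! caller-supplied label for the call site
    integer :: iErr, jErr, msgLen
    character(len=MPI_MAX_ERROR_STRING) :: msg
    call mpi_waitany(count, requests, indx, MPI_STATUS_IGNORE, iErr)
    if (iErr /= MPI_SUCCESS) then
      call mpi_error_string(iErr, msg, msgLen, jErr)
      write(*,*) 'MPI_WAITANY failed at ', site, ': ', msg(1:msgLen)
      call mpi_abort(MPI_COMM_WORLD, 1, jErr)
    end if
  end subroutine checked_waitany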

AMR line transfer

Have a maybe-halfway-there commit on the new branch AMR_line_transfer. Here's the relevant code:

linetransfer_declarations

Ray type and linked list of rays. Ray may need a few more data structures inside.

  type RayDef
    integer :: sendRequest, recvRequest    ! MPI request handles for this ray's outstanding send/receive
    real(kind=qPrec) :: currentIonizingTau, currentIonizingFlux, currentLymanTau, currentLymanFlux    ! running optical depths and fluxes along the ray
    type(InfoDef), pointer :: info         ! patch the ray is currently traversing
    integer :: localCoord(nDim), level     ! zone index within that patch, and its AMR level
    type(RayDef), pointer :: next
  end type

  type RayList
    sequence
    type(RayDef), pointer :: self          ! ray stored at this list node
    type(RayList), pointer :: next
  end type
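
On the open question below of defining a ray type for MPI communication: only the taus and fluxes need to cross processor boundaries (pointers and request handles stay local), so a minimal sketch, assuming qPrec is double precision, could pack them as four contiguous doubles:

  ! Sketch, not committed code: the payload actually communicated for a ray
  type RayPayload
    sequence
    real(kind=qPrec) :: ionizingTau, ionizingFlux, lymanTau, lymanFlux
  end type

  integer :: rayMpiType, iErr

  ! Four contiguous doubles, so mpi_type_contiguous suffices; mixed kinds
  ! would need mpi_type_create_struct instead
  call mpi_type_contiguous(4, MPI_DOUBLE_PRECISION, rayMpiType, iErr)
  call mpi_type_commit(rayMpiType, iErr)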

linetransfer_control

Truncated code for the integration along a ray; the only change is that we no longer have to hack q from the layout data.

    type(infodef), pointer :: info
    type(RayList), pointer :: rayListHead, rayList   ! pointers so the list can be walked; rayListHead remembers the start
    type(RayDef), pointer :: ray
    integer :: lev, ii                               ! level loop index (n is the current hydro level) and scan-ahead index

    !!! Still only want to do the calculation once, after each hydro update is complete
    IF (n < FinestLevel) then 
      linetransfer_iters = 0
      SubcyclingReason = 'N'
      RETURN
    end if

    call StartTimer(iLineTransferTimer,n)

    ! Convert total to internal energy on all levels
    CALL ConvertLevelsTotalToInternalEnergy

    dt=levels(n)%dt
    tnow=levels(n)%tnow
    tfinal=tnow+dt

    linetransfer_iters = 0
    DO WHILE (tnow < tfinal)

      rates = 0d0
      ! Set info%q to default values: zero flux, and optical depth 230
      ! (attenuation e^-230, effectively opaque) until a ray updates it
      if (lt_iF /= 0) info%q(:,:,:,iFlux) = 0d0
      if (lt_iD /= 0) info%q(:,:,:,iDepth) = 230d0
      if (lt_iFa /= 0) info%q(:,:,:,iFlux_a) = 0d0
      if (lt_iDa /= 0) info%q(:,:,:,iDepth_a) = 230d0

      !!!!!!! Things to think about with sends/receives:
      !!!! Should tag each ray individually (related to position on left boundary; corrections for split and joined rays?)
      !!!! When crossing child patch, can't continue on parent until we've received the ray back from child
      !!!! How and when can we collect the rays from finer grids?
      !!!! When can we distribute the rays to coarser grids?

      !!! Nodelist is local, right? And if a processor has a parent and child, can it communicate the ray without MPI?
      rayList=>rayListHead  ! start from the head of this processor's ray list
      ray=>rayList%self
      DO lev=0,MaxLevel     ! separate loop index: n is the current hydro level, used for dt above
        nodelist=>Nodes(lev)%p
        DO WHILE (ASSOCIATED(nodelist))
          info=>nodelist%self%info    ! info points to current patch info
          ! Loop over left boundary of current patch
          do j = 1,info%mx(2,2)
            do k = 1,info%mx(3,2)
              if (lev == 0 .and. leftBoundaryAtXmin) then    ! Pseudocode: leftBoundaryAtXmin stands for "this patch's left edge lies on the domain boundary xmin"; there we set the ray to boundary conditions
                ray%level = lev
                ray%currentIonizingTau = 0
                ray%currentIonizingFlux = IonizingFlux
                ray%info=>nodelist%self%info    !!! ray%info now points permanently to the correct patch info?
                ray%localCoord = (/1,j,k/)    ! zone index for local patch
              else
                if (info%childmask(0,j,k) == -1) then   ! If we're a finer grid, receive ray from parent and divide
                  call mpi_irecv(receivedRay, 1, mpi_defined_type, parent, linetransfer_tag, mpi_comm_world, recvRequests(parent), iErr)   ! Receive ray from parent   !!! Define a ray type for mpi communication? Only need tau and flux; where is processor of parent stored?

                  !!! but this can only actually be done once ray is received... so move it to processing loop, and need to process rays based on origin
                  rayNum = 1
                  ! Divide ray from parent across appropriate cells
                  do while (rayNum <= 4)
                    ray%level = lev
                    ray%currentIonizingTau = receivedRay%currentIonizingTau
                    ray%currentIonizingFlux = receivedRay%currentIonizingFlux/4d0   ! each fine ray carries a quarter of the coarse flux
                    ray%info=>nodelist%self%info
                    ray%localCoord = (/1/)    !!! not sure how to set this one yet
                    if (rayNum < 4) then
                      rayList=>rayList%next   ! advance to the next (pre-allocated) ray
                      ray=>rayList%self
                    end if
                    rayNum = rayNum + 1
                  end do
                else if (info%childmask(0,j,k) > 0 .or. info%childmask(0,j,k) == neighborchild) then        ! if we're on a coarser grid not at domain boundary or patch boundary, do nothing   !!! How is neighborchild set? It's not a parameter...
                else
                  call mpi_irecv(ray, 1, mpi_defined_type, neighbor, linetransfer_tag, mpi_comm_world, recvRequests(neighbor), iErr)   ! Receive ray from neighbor   !!! Define a ray type for mpi communication? Also probably need an array of requests, rather than storing them in the individual rays, so we don't have to iterate through the list to check them all...
                  ray%level = lev
                  ray%info=>nodelist%self%info
                  ray%localCoord = (/1/)    !!! not sure how to set this one yet
                end if
              end if
              rayList=>rayList%next
              ray=>rayList%self     ! keep ray pointing at the current list node
            end do
          end do
          nodelist=>nodelist%next
        end do
      end do

      ! Return explicitly to the head of the list for the processing sweep
      rayList=>rayListHead
      do while (associated(rayList))
        ray=>rayList%self     ! ray at the current list node
        if (ray%currentIonizingFlux /= IonizingFlux) call mpi_waitany(recvCount, recvRequests, indx, mpi_status_ignore, iErr)   !!! wait for new ray if we're not on the left boundary of domain
        info=>ray%info
        do i = 1,info%mx(1,2)
          j = ray%localCoord(2)
          k = ray%localCoord(3)
          if (i == info%mx(1,2)) then   ! If we're at right boundary of current patch
            if (info%childmask(i,j,k) == 0 .or. info%childmask(i,j,k) == neighborchild) then
              call mpi_isend(ray, 1, mpi_defined_type, neighbor, linetransfer_tag, mpi_comm_world, sendRequests(neighbor), iErr)    ! Send to neighbor
            else
              !!! Need to sum appropriate rays here - mpi_reduce? Except collectives are blocking, so we'd never get to another ray to do the sum... so need a conditional to test when we're ready to send a ray to parent
              call mpi_isend(ray, 1, mpi_defined_type, parent, linetransfer_tag, mpi_comm_world, sendRequests(parent), iErr)    ! Send to parent
            end if
          else if (info%childmask(i,j,k) > 0) then
            call mpi_isend(ray, 1, mpi_defined_type, child, linetransfer_tag, mpi_comm_world, sendRequests(child), iErr)      ! Send to child
            do ii = i,info%mx(1,2)   ! Fortran forbids reusing the active index i here, so scan ahead with ii   !!! still need a way to resume the outer loop at ii
              if (info%childmask(ii,j,k) < 0 .and. info%childmask(ii,j,k) /= neighborchild) then
                call mpi_irecv(ray, 1, mpi_defined_type, child, linetransfer_tag, mpi_comm_world, recvRequests(child), iErr)     ! Receive from child (a receive, so recvRequests)
                exit      ! Return to outer loop at new location      !!! Need to wait to receive summed rays from child before we can continue along this ray...
              end if
            end do
          else

            !!! can break this back into subroutine(s)

            ! Goal is to take input flux and integrate equations across grid to find ionization rate and heating rate
            ! Flux has units of number per area per time in computational units
            ! Data has computational units of the density of neutral hydrogen, ionized hydrogen, ionization rate, and heating rate
            q=>info%q(i,j,k,:)
            ! Calculate radiative transfer of ionizing flux
            .....
            ! Calculate radiative transfer of Lyman-alpha flux
            .....
            !!! Could do all of this as array operations instead of zone by zone - probably more efficient?
            ! Adjust heating and cooling rates if they exist
            .....
              ! Recombination
              .....
              ! Recombination cooling
              .....
              ! Lyman alpha cooling
              .....
            ! Charge exchange
            .....
            !Update minimum time step by looking at rate of energy, neutral, and ionized species
            .....
          end if
        end do
        rayList=>rayList%next
      end do

#if defined _MPI
    CALL MPI_ALLREDUCE(MPI_IN_PLACE, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN, MPI_COMM_WORLD, iErr)
#endif     

    dt=min(dt,tfinal-tnow)
    IF (dt == 0d0) THEN
      WRITE(*,*) 'dt = ', dt, tnow, tfinal
      STOP
    END IF

    linetransfer_iters=linetransfer_iters + 1
    tnow=tnow+dt

    call mpi_waitall(sendCount, sendRequests, mpi_statuses_ignore, iErr)   !!! wait for all sends after we've finished the rays that we needed to receive
        
  END DO

meeting update

Increased the resolution of the supernova core collapse problem… am getting funky boundary issues and poor HSE in the central core.

possible fixes:

  • integrate outward from the center instead of inward from the ambient medium in the HSE routine (see the sketch below)
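
A minimal sketch of the outward integration (my own illustration, not the AstroBEAR HSE module; the 1D profile arrays and the central pressure guess are assumed inputs):

  ! Sketch: outward hydrostatic integration on a 1D radial grid, given
  ! density rho(1:nr) at radii r(1:nr) and a central pressure guess Pc
  subroutine hse_outward(nr, r, rho, Pc, P)
    implicit none
    integer, intent(in) :: nr
    real(8), intent(in) :: r(nr), rho(nr), Pc
    real(8), intent(out) :: P(nr)
    real(8), parameter :: G = 6.674d-8, pi = 3.14159265358979d0   ! cgs
    real(8) :: m(nr), rmid, rhomid
    integer :: i

    ! Enclosed mass from the same profile (midpoint rule per shell)
    m(1) = 4d0/3d0*pi*r(1)**3*rho(1)
    do i = 2, nr
      rmid = 0.5d0*(r(i)+r(i-1))
      rhomid = 0.5d0*(rho(i)+rho(i-1))
      m(i) = m(i-1) + 4d0*pi*rmid**2*rhomid*(r(i)-r(i-1))
    end do

    ! dP/dr = -G m(r) rho / r^2, integrated from the center outward so
    ! truncation error accumulates in the outer zones, not in the core
    P(1) = Pc
    do i = 2, nr
      rmid = 0.5d0*(r(i)+r(i-1))
      rhomid = 0.5d0*(rho(i)+rho(i-1))
      P(i) = P(i-1) - G*0.5d0*(m(i)+m(i-1))*rhomid/rmid**2*(r(i)-r(i-1))
    end do
  end subroutine hse_outward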

Update 10/15

Parameter space paper

Spent most of last week working on this. Fixed the issue where the colormap wasn't clipping at the lowest value (interpolation gives NaNs beyond the range). Still need to figure out the best new range, but here's a preview:

http://www.pas.rochester.edu/~adebrech/PlanetIonization/Run1_rot_rxt.png

Also made a number of edits; a few places still need work. Would like to have it resubmitted by the end of next week (proposals are all due this week, I believe?).

HD209458b

Synthetic observations aren't promising here. We're using up to about half the flux reported in the literature, which would probably up the mass loss rate by a factor of ~1.9 (to about 10^10, still a little on the low side).

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_transit.png

http://www.pas.rochester.edu/~adebrech/HD209458b/planet-corrected_transit.png

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_attenuation_smaller.png

Radiation Pressure

High Flux

We get the same sort of burping that John sees in his medium-wind case.

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_high_flux0083.png

Med Flux (Actual HD209458b Lyman-alpha Flux)

Very subtle widening of the arms, due to some material being pushed radially outward.

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_med_flux0106.png

Low Flux

Even more subtle widening, very similar to medium flux.

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_low_flux0103.png

Medium and low flux appear to be pretty much steady-state. We probably don't need more frames - it doesn't look like there'll be much change to average out. All three runs are still queued on Stampede, but we're pretty low on time now. Is this satisfactory for making synthetic observations?

PNe Movies

Directory for all the movies

Table 1
                         High momentum wind   Low momentum wind
rho_FW (g/cc)            1e-11                5e-13
v_RW (km/s)              300                  300
quiescent phase (days)   6000                 6000
T_FW (K)                 30000                30000

  • Table 2: links to the individual movies
    • High momentum wind (Run-02, 7465 days):
      rho: http://www.pas.rochester.edu/~yzou5/PNe_simulation_movies/High_momentum_run02_D7465_rho.gif
      Temp: http://www.pas.rochester.edu/~yzou5/PNe_simulation_movies/High_momentum_run02_D7465_T.gif
    • Low momentum wind (Run-01, 9100 days):
      rho: http://www.pas.rochester.edu/~yzou5/PNe_simulation_movies/Low_momentum_run01_D9100_rho.gif
      Temp: http://www.pas.rochester.edu/~yzou5/PNe_simulation_movies/Low_momentum_run01_D9100_T.gif
  • Note: interpolations is set to 1 in Run-02 and to 0 in Run-01

Updates 10/08

PNe

  • Will update links to output once they are available

High momentum wind

  • Run-02: ran to 1465 days post quiescent phase

Rarefaction issue

  • Run-02: a snapshot from high-momentum Run-02 with interpolations = 1. https://astrobear.pas.rochester.edu/trac/attachment/blog/amyzou10012018/high-run002.PNG
  • Notice the rarefaction on the edge of a square "box" with side = 8 (unit = 1000 R_sun). It's also seen in the previous simulation by Zhuo (see the low-momentum case in the pdf).

Low momentum wind

  • Reran to 2600 days post quiescent phase

Recombination

Update 10/8

Radiation Pressure

Runs on Stampede2:

Running on 128 nodes (6144 cores), we currently (frame 211) have an efficiency of 13% hydro, 38% line transfer (including layout communications), and 52% total, with estimated completion in 2.9 days. This is down from 23%, 54%, and 76% at frame 203, with estimated completion in 1.5 days, where the current run started. Overall, the estimated wall time is down by a factor of ~10 from the earlier ~1-month estimate. Produced 8 frames in the last 6 hours.

Max speeds are only on the order of tens, and haven't changed significantly in the current run (nor has dt). Here are some comparisons of the mesh:

Frame 180 (pre-radiation pressure):

http://www.pas.rochester.edu/~adebrech/visit0020.png

http://www.pas.rochester.edu/~adebrech/visit0022.png

Frame 203:

http://www.pas.rochester.edu/~adebrech/visit0021.png

http://www.pas.rochester.edu/~adebrech/visit0023.png

Frame 211:

http://www.pas.rochester.edu/~adebrech/visit0025.png

http://www.pas.rochester.edu/~adebrech/visit0024.png

So it's definitely related to increased resolution due to disturbance by the radiation pressure, which should settle down eventually.

Pseudocode for AMR line transfer:

Start line transfer
if (processor at -x boundary)
  calculate radiative transfer (to child boundaries?)    ! only actually needs to be calculated on finest grid
  if (hasChildren)
    prolongate at boundaries and send to child           ! somehow need to be traveling along the ray while we do this, so that we encounter children in order (so that prolongation and restriction are correct)
  end if
  send ray to neighbor
else
  receive from neighbor
  calculate radiative transfer
  if (hasChildren)
    prolongate at boundaries
  end if
  if (not at +x boundary)
    send to neighbor
  end if
end if

Related: Have we ever thought about using the MPI distributed graph routines to keep track of neighbors?
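
For reference, a sketch of what that could look like (illustrative only; the neighbor ranks are assumed to come from the existing layout data). MPI-3's mpi_dist_graph_create_adjacent builds a communicator that records each rank's neighbors, after which the neighborhood collectives exchange data with all of them in one call, with no per-request bookkeeping:

  ! Sketch: build a distributed-graph communicator from known neighbor ranks
  subroutine make_neighbor_comm(nNbrs, nbrs, graphComm)
    use mpi
    implicit none
    integer, intent(in) :: nNbrs, nbrs(nNbrs)   ! neighbor ranks, assumed taken from layout data
    integer, intent(out) :: graphComm
    integer :: iErr
    ! Symmetric neighborhood: the same ranks send to us and receive from us
    call mpi_dist_graph_create_adjacent(MPI_COMM_WORLD, &
         nNbrs, nbrs, MPI_UNWEIGHTED, &
         nNbrs, nbrs, MPI_UNWEIGHTED, &
         MPI_INFO_NULL, .false., graphComm, iErr)
  end subroutine make_neighbor_comm

A call like mpi_neighbor_alltoall(sendbuf, 1, rayMpiType, recvbuf, 1, rayMpiType, graphComm, iErr) would then move one ray payload to and from every neighbor at once.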

HD209458b

Discovered that there's something incorrect about the stellar size output by the code (it's still set by the stellar lambda, since the size isn't needed on the grid). Rerunning postprocessing on BlueHive; should have some new results tomorrow morning.

meeting notes

Am trying to interpolate the following 1D radial profiles onto AstroBEAR's grid in 3D octant symmetry:

progenitor profiles

These profiles represent a ~5 solar mass progenitor for a core collapse supernova. The red vertical line is the radial cut-off I am taking for initialization onto the grid; beyond this radius I am switching the profiles to constant ambient values.

These profiles are assumed to be in ~HSE. Thus, I used the HSE module in AstroBEAR for self-gravitating gas to produce the following grid-interpolated profiles:

Neutrino cooling is being mocked up in the code using the empirical formula:

For these initial conditions, this works out to be ~ erg/s for the peak density. In computational units, this is .

The initial internal energy of the central zone is,

So if the cooling rate were constant over , we would expect to see the internal energy drop to . Here is a plot of the internal energy over dt:

iE

In other words, if the cooling time is , then within this timestep, .

The HSE module produces this:

HSE

Pretty close, except at the origin…

grav vs. press, initial frame

After freefall time (or 38 cooling times):

rho decrease in 1 ff time

The mesh shows strange behavior on the reflecting sides:

boundaries awry

And no sink particles are being used.

COMMON ENVELOPE SIMULATIONS

Summary of last two weeks' work

  1. Running simulations:
    1. As Model A of Paper 1 but with
      1. ½ secondary mass (mostly complete) [Run 149]
      2. ¼ secondary mass (half completed) [Run 151]
    2. RLOF tests (partial runs, complete, need to analyze output)
      1. Separation of 109 Rsun, equal to Roche limit separation [Run 160]
      2. Separation of 73.5 Rsun, equal to 1.5 times the separation of Model A of Paper 1 [Run 161]
      3. Separation of 109 Rsun, with ambient density 10^-10 g/cc instead of 6.7x10^-9 g/cc [Run 162]
    3. Convergence tests for resolution and softening length (3 partial runs, each about half completed) [Runs 152, 153, 154]
    4. Convergence test for max refinement volume (full run, half completed) [Run 163]
    5. AGB test run (Pending) [Run 164]
      • Box is the same as Model A of Paper 1
      • AGB has radius 122 Rsun (instead of 48 Rsun for RGB)
      • Primary mass 1.8 Msun (instead of 2.0 Msun)
      • Primary core mass 0.53 Msun (instead of 0.37 Msun)
      • Secondary mass 1.0 Msun identical to Model A of Paper 1
      • Initial separation 124 Rsun
      • Refined out to radius 1e13 cm = 144 Rsun (instead of 5e12 cm = 72 Rsun)
      • Reduced the resolution to maxlevel=3 (from maxlevel=4)
      • Chose ambient pressure P_amb = 10^4 dyne/cm^2; expect to resolve the scale height at the surface
      • Chose ambient density rho_amb = 10^-9 g/cm^3; compare to the surface density of 4x10^-9 g/cm^3
  2. Working on XSEDE proposal:
    1. Decided that the main objective should be to run a simulation with an AGB star
      1. 8 times bigger box but degrade ambient resolution by the same factor
      2. Might have to refine only core of AGB and surface shell of AGB at maxlevel
      3. Hydrostatic atmosphere that matches to a uniform pressure-uniform density atmosphere at a certain radius, to avoid larger ambient density/pressure
    2. Could explore dependence on secondary mass again
    3. And/or could try 3-body problem
    4. Use about ¼ of time to do convergence tests
  3. Plan for papers (not including work on recombination and dust led by Amy):
    1. Energy/envelope unbinding paper (November)
    2. Jets (~6 months)
    3. Dependence on secondary mass (~9 months)
      1. Energy and envelope unbinding
      2. Drag force
      3. Convergence tests
    4. Simulation with AGB star (~12-15 months)
      1. Energy and envelope unbinding
      2. Drag force
      3. Comparison with RGB simulations
      4. Possibly dependence on secondary mass (if resources permit)
    5. Three-body problem (if resources permit)
    6. RLOF or RLOF+CEE with RGB/AGB (let's see)

Ivanova+2013 equation 3:

New equation suggested by us:

Run 143 (Model A of Paper I) has initial separation 49 Rsun (= 73.5/1.5, cf. Run 161). The Roche limit separation for the mass ratio in our case is 109 Rsun (cf. Run 160).

Run 164 (AGB) has initial separation 124 Rsun. The Roche limit radius for the same secondary mass as in the RGB run is .

Then at what value of should the envelope be completely unbound, for a given value of ?

RGB (lambda=1.3)
Ivanova+2013 with
Ivanova+2013 with
New equation with
New equation with
AGB (lambda=0.9)
Ivanova+2013 with
Ivanova+2013 with
New equation with
New equation with

So the final separation needed for envelope removal is almost 4 times larger for the AGB!

Note also that the final separation is roughly given by the following asymptotic formula ():

,

where . This formula also says that should go up even more if we make larger.

But it might take a lot longer to get there than in the RGB case. We need to estimate how long it would take in simulation days, both from the preliminary test run 164 and from analytic estimates (I will try to do this today). We will use this estimate to set what computing resources we ask for.

Updates 10/01

PNe

Rerunning high-momentum wind with interpolations = 1 (Run-02)

Quiescent phase (6000 days from start) is done; continuing to see if the fast wind phase produces the same rarefactions between cell boundaries. Zhuo also suggested removing the point mass at the center.

Here is a comparison between snapshots from the conference proceeding and those reproduced using Run-01 data.
http://www.pas.rochester.edu/~yzou5/PNe_compare-snapshot.pdf

  • A snapshot from high-momentum Run-02 with interpolations = 1. Notice the rarefaction on the edge of a square "box" with side = 8 (unit = 1000 R_sun); it's also seen in the previous simulation by Zhuo (see the low-momentum case in the pdf): https://astrobear.pas.rochester.edu/trac/attachment/blog/amyzou10012018/high-run002.PNG

Recombination

https://www.overleaf.com/read/xsrfvtjfcyzs

Update 10/1

Radiation Pressure

Stampede is working now, even without increasing the request descriptors, because it's running much more slowly, as Baowei was finding (~1 month to get to 300 frames). I think it's time to tackle AMR line transfer.

Also, we could theoretically run on BlueHive. Based on the new speed on Stampede, though, it would take months to complete a run.

HD209458b

Some very interesting results from the synthetic observations. Very little variation over time, as expected from looking at the simulation.

Here's the highlight:

http://www.pas.rochester.edu/~adebrech/HD209458b/HD209458b_obs.eps

We see significant symmetric absorption in the blue and red wings during transit (but not much before or after). Perhaps worth investigating a finer time series of these.