Update 10/22

Parameter Space Paper

Just a few comments left. Copied from my email:

Places/comments that could still use some attention:

  • General qualitativeness and informality (does anything in particular stand out?)
  • Where to discuss the appropriateness of the case B recombination assumption: in the methods where we state the assumption, or later in the results/conclusions?
  • I've added a few citations to John's paper. Further suggestions for places to add a reference, or more discussion (particularly in section 4.2), are welcome.

Response-only:

  • How to respond to the single-frequency suggestion (just say we've considered it?)
  • Square features in the streak images come from the lack of velocity to convolve with the noise

In addition to addressing the reviewer comments, I've updated the figures with new colorbar ranges that should highlight some of the features more clearly.

HD209458b synthetic observations

Still processing. The high radiation pressure run is complete; I should have results for the low and medium radiation pressure runs soon. Should I average the observations over burp and non-burp times? I'll also look at what John did.

HD209458b mass loss rate

The high ionizing flux run crashed with an MPI_WAITANY error at level 4 after a few time steps. This is the first time it has happened (the code is unchanged), so I've requeued the run, but I'll also go see if I can find the cause. Unfortunately, the error message doesn't give the location of the call; fortunately, we only have a few calls to MPI_WAITANY.
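
Since the error message doesn't give a location, one way to narrow it down would be to switch the communicator's error handler to MPI_ERRORS_RETURN and wrap the few MPI_WAITANY call sites so each one reports where it is. A rough sketch (WaitAnyTagged and the location strings are made-up names, not anything in the code):

  ! Sketch only: report which MPI_WAITANY call failed instead of aborting with no location.
  ! Assumes the communicator's error handler has been switched to MPI_ERRORS_RETURN at startup, e.g.
  !   call MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, iErr)
  subroutine WaitAnyTagged(nRequests, requests, indx, location)
    use mpi
    integer, intent(in) :: nRequests
    integer, intent(inout) :: requests(nRequests)
    integer, intent(out) :: indx
    character(len=*), intent(in) :: location   ! e.g. 'linetransfer_control: wait for incoming ray'
    integer :: iErr, iErr2
    call MPI_WAITANY(nRequests, requests, indx, MPI_STATUS_IGNORE, iErr)
    if (iErr /= MPI_SUCCESS) then
      write(*,*) 'MPI_WAITANY failed (', location, '), ierr = ', iErr
      call MPI_ABORT(MPI_COMM_WORLD, iErr, iErr2)
    end if
  end subroutine WaitAnyTagged

Each existing call would then become something like call WaitAnyTagged(recvCount, recvRequests, indx, 'linetransfer_control: wait for incoming ray').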

AMR line transfer

I have a rough, maybe-halfway-there commit on the new branch AMR_line_transfer. Here's the relevant code:

linetransfer_declarations

A ray type and a linked list of rays. The ray type may need a few more data structures inside.

  type RayDef
    integer :: sendRequest, recvRequest    ! MPI request handles for this ray's send/receive
    real(kind=qPrec) :: currentIonizingTau, currentIonizingFlux, currentLymanTau, currentLymanFlux
    type(InfoDef), pointer :: info    ! patch the ray is currently traversing
    integer :: localCoord(nDim), level    ! zone index within that patch, and AMR level
    type(RayDef), pointer :: next
  end type RayDef

  type RayList
    sequence
    type(RayDef), pointer :: self    ! ray stored at this list node
    type(RayList), pointer :: next
  end type RayList
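
One possible answer to the "define a ray type for mpi communication?" notes below: only the tau and flux values need to cross processors, so a small buffer type plus MPI_TYPE_CONTIGUOUS might be enough. A sketch (RayMsg and CreateRayMPIType are placeholder names, not in the commit):

  ! Sketch: a plain buffer holding only the pieces of a ray that have to travel between processors
  type RayMsg
    sequence
    real(kind=qPrec) :: currentIonizingTau, currentIonizingFlux, currentLymanTau, currentLymanFlux
  end type RayMsg

  ! Build and commit the MPI datatype used as mpi_defined_type in the sends/receives
  subroutine CreateRayMPIType(mpi_defined_type)
    use mpi
    integer, intent(out) :: mpi_defined_type
    integer :: iErr
    call MPI_TYPE_CONTIGUOUS(4, MPI_DOUBLE_PRECISION, mpi_defined_type, iErr)   ! assuming qPrec is double precision
    call MPI_TYPE_COMMIT(mpi_defined_type, iErr)
  end subroutine CreateRayMPIType

The isend/irecv calls would then ship a RayMsg copied out of the ray rather than the RayDef itself, since the pointers inside RayDef can't be sent over MPI.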

linetransfer_control

Truncated code for the integration along a ray; the only change is that we no longer have to hack q out of the layout data.

    type(infodef), pointer :: info
    type(RayList), pointer :: rayList    ! must be a pointer so we can walk the list below
    type(RayDef), pointer :: ray

    !!! Still only want to do the calculation once, after each hydro update is complete
    IF (n < FinestLevel) then 
      linetransfer_iters = 0
      SubcyclingReason = 'N'
      RETURN
    end if

    call StartTimer(iLineTransferTimer,n)

    ! Convert total to internal energy on all levels
    CALL ConvertLevelsTotalToInternalEnergy

    dt=levels(n)%dt
    tnow=levels(n)%tnow
    tfinal=tnow+dt

    linetransfer_iters = 0
    DO WHILE (tnow < tfinal)

      rates = 0d0
      ! Set info to default values
      if (lt_iF /= 0) info%q(:,:,:,iFlux) = 0d0
      if (lt_iD /= 0) info%q(:,:,:,iDepth) = 230d0
      if (lt_iFa /= 0) info%q(:,:,:,iFlux_a) = 0d0
      if (lt_iDa /= 0) info%q(:,:,:,iDepth_a) = 230d0

      !!!!!!! Things to think about with sends/receives:
      !!!! Should tag each ray individually (related to position on left boundary; corrections for split and joined rays?)
      !!!! When crossing child patch, can't continue on parent until we've received the ray back from child
      !!!! How and when can we collect the rays from finer grids?
      !!!! When can we distribute the rays to coarser grids?
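      !!! One possible tagging scheme (a sketch; jGlobal, kGlobal, and nyGlobal are hypothetical names
      !!! for the ray's global left-boundary indices and the global zone count in y): tag each ray with
      !!! its root-level column, tag = (kGlobal-1)*nyGlobal + jGlobal, and have split rays inherit the
      !!! parent's tag so joined rays can be matched back up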

      !!! Nodelist is local, right? And if a processor has a parent and child, can it communicate the ray without MPI?
      ray=>rayList%self   ! Head of linked list for this processor
      DO n=0,MaxLevel
        nodelist=>Nodes(n)%p
        DO WHILE (ASSOCIATED(nodelist))
          info=>nodelist%self%info    ! info points to current patch info
          ! Loop over left boundary of current patch
          do j = 1,info%mx(2,2)
            do k = 1,info%mx(3,2)
              if (n == 0 .and. left boundary == xmin) then    ! If we're on domain boundary at root level, set ray to boundary conditions
                ray%level = n
                ray%currentIonizingTau = 0
                ray%currentIonizingFlux = IonizingFlux
                ray%info=>nodelist%self%info    !!! ray%info now points permanently to the correct patch info?
                ray%localCoord = (/1,j,k/)    ! zone index for local patch
              else
                if (info%childmask(0,j,k) == -1) then   ! If we're a finer grid, receive ray from parent and divide
                  call mpi_irecv(receivedRay, 1, mpi_defined_type, parent, linetransfer_tag, mpi_comm_world, recvRequests(parent))   ! Receive ray from parent   !!! Define a ray type for mpi communication? Only need tau and flux; where is processor of parent stored?

                  !!! but this can only actually be done once ray is received... so move it to processing loop, and need to process rays based on origin
                  ! Divide ray from parent across the four child zones it covers
                  do rayNum = 1,4
                    ray%level = n
                    ray%currentIonizingTau = receivedRay%currentIonizingTau
                    ray%currentIonizingFlux = receivedRay%currentIonizingFlux/4d0
                    ray%info=>nodelist%self%info
                    ray%localCoord = (/1/)    !!! not sure how to set this one yet
                    if (rayNum < 4) then
                      rayList=>rayList%next
                      ray=>rayList%self    ! advance to the next ray node, otherwise every pass overwrites the same ray
                    end if
                  end do
                else if (info%childmask(0,j,k) > 0 .or. info%childmask(0,j,k) == neighborchild) then        ! if we're on a coarser grid not at domain boundary or patch boundary, do nothing   !!! How is neighborchild set? It's not a parameter...
                else
                  call mpi_irecv(ray, 1, mpi_defined_type, neighbor, linetransfer_tag, mpi_comm_world, recvRequests(neighbor))   ! Receive ray from neighbor   !!! Define a ray type for mpi communication? Also probably need an array of requests, rather than storing them in the individual rays, so we don't have to iterate through the list to check them all...
                  ray%level = n
                  ray%info=>nodelist%self%info
                  ray%localCoord = (/1/)    !!! not sure how to set this one yet
                end if
              end if
              rayList=>rayList%next
              if (associated(rayList)) ray=>rayList%self    ! move to the next ray for the next (j,k) column
            end do
          end do
          nodelist=>nodelist%next
        end do
      end do

      !!! This does NOT return to the head of the list - rayList was advanced in the loop above, so we
      !!! need a saved head pointer to rewind to here (see the sketch after this listing)
      ray=>rayList%self
      do while (associated(rayList))
        if (ray%currentIonizingFlux /= IonizingFlux) call mpi_waitany(recvCount, recvRequests, indx, mpi_status_ignore)   !!! wait for new ray if we're not on the left boundary of domain
        info=>ray%info
        do i = 1,info%mx(1,2)
          j = ray%localCoord(2)
          k = ray%localCoord(3)
          if (i == info%mx(1,2)) then   ! If we're at right boundary of current patch
            if (info%childmask(i,j,k) == 0 .or. info%childmask(i,j,k) == neighborchild) then
              call mpi_isend(ray, 1, mpi_defined_type, neighbor, linetransfer_tag, mpi_comm_world, sendRequests(neighbor))    ! Send to neighbor
            else
              !!! Need to sum appropriate rays here - mpi_reduce? Except collectives are blocking, so we'd never get to another ray to do the sum... so need a conditional to test when we're ready to send a ray to parent
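              !!! One way around the blocking-collective problem (a sketch, with a hypothetical per-cell
              !!! counter childRaysReceived): accumulate each child ray's tau/flux into the parent ray as
              !!! it arrives, increment childRaysReceived, and only post this isend once the count reaches
              !!! 4 - no collectives needed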
              call mpi_isend(ray, 1, mpi_defined_type, parent, linetransfer_tag, mpi_comm_world, sendRequests(parent))    ! Send to parent
            end if
          else if (info%childmask(i,j,k) > 0) then
            call mpi_isend(ray, 1, mpi_defined_type, child, linetransfer_tag, mpi_comm_world, sendRequests(child))      ! Send to child
            !!! Fortran won't let us reuse the active DO variable i as the index of this inner loop, so
            !!! the outer loop over i will probably have to become a DO WHILE with a manually advanced
            !!! index in order to skip past the child region
            do i = i,info%mx(1,2)
              if (info%childmask(i,j,k) < 0 .and. info%childmask(i,j,k) /= neighborchild) then
                call mpi_irecv(ray, 1, mpi_defined_type, child, linetransfer_tag, mpi_comm_world, recvRequests(child))     ! Receive from child
                exit      ! Return to outer loop at new location      !!! Need to wait to receive summed rays from child before we can continue along this ray...
              end if
            end do
          else

            !!! can break this back into subroutine(s)

            ! Goal is to take input flux and integrate equations across grid to find ionization rate and heating rate
            ! Flux has units of number per area per time in computational units
            ! Data has computational units of the density of neutral hydrogen, ionized hydrogen, ionization rate, and heating rate
            q=>info%q(i,j,k,:)
            ! Calculate radiative transfer of ionizing flux
            .....
            ! Calculate radiative transfer of Lyman-alpha flux
            .....
            !!! Could do all of this as array operations instead of zone by zone - probably more efficient?
            ! Adjust heating and cooling rates if they exist
            .....
              ! Recombination
              .....
              ! Recombination cooling
              .....
              ! Lyman alpha cooling
              .....
            ! Charge exchange
            .....
            !Update minimum time step by looking at rate of energy, neutral, and ionized species
            .....
          end if
        end do      ! end of loop over zones along this ray
        rayList=>rayList%next
        if (associated(rayList)) ray=>rayList%self    ! point ray at the next list entry
      end do      ! end of loop over rays

#if defined _MPI
    CALL MPI_ALLREDUCE(MPI_IN_PLACE, dt, 1, MPI_DOUBLE_PRECISION, MPI_MIN, MPI_COMM_WORLD, iErr)
#endif     

    dt=min(dt,tfinal-tnow)
    IF (dt == 0d0) THEN
      WRITE(*,*) 'dt = ', dt, tnow, tfinal
      STOP
    END IF

    linetransfer_iters=linetransfer_iters + 1
    tnow=tnow+dt

    call mpi_waitall(sendCount, sendRequests, mpi_statuses_ignore)   !!! wait for all sends after we've finished the rays that we needed to receive
        
  END DO
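
Related to the head-of-list question above: since both passes advance rayList, the list needs a saved head pointer to rewind to between them. A minimal sketch of the pattern (rayListHead is a name I'm introducing here, not in the commit):

    type(RayList), pointer :: rayListHead, rayList
    type(RayDef), pointer :: ray

    rayList => rayListHead              ! first pass: build/initialize the rays, advancing rayList

    rayList => rayListHead              ! second pass: rewind to the head before processing
    do while (associated(rayList))
      ray => rayList%self
      ! ... integrate along this ray ...
      rayList => rayList%next
    end do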
