Camera Objects
Camera Objects can be created within the ProblemModuleInit routine in problem.f90. To create Cameras you first need to add two USE statements to your problem.f90:

USE Cameras
USE Fields
Then in ProblemModuleInit declare a variable pointer of type CameraDef
TYPE(CameraDef), POINTER :: Camera
Then create the Camera and set the various parameters as in the following example:

CALL CreateCamera(Camera)
Camera%pos = (/4d0,-10d0,4d0/)
Camera%UpVector = (/0d0,0d0,1d0/)
Camera%Focus = (/4d0,4d0,4d0/)
Camera%FOV = (/30d0,3d0/)
- Cameras are currently only used by the Projection Object.
Here is a full list of the various Camera parameters with the default values in brackets:
REAL(KIND=qPREC), DIMENSION(3) :: pos = DEFAULTCAMERAPOS    ! Will choose a point at a distance in -y to see the entire domain.
REAL(KIND=qPREC), DIMENSION(3) :: UpVector = (/0d0,0d0,1d0/)
REAL(KIND=qPREC), DIMENSION(3) :: Focus = CENTER            ! Will select the center of the simulation domain.
REAL(KIND=qPREC), DIMENSION(2) :: FOV = (/30d0,30d0/)
- Given the camera position c and the focus f, we can construct the camera normal vector n = (f - c)/|f - c|.
- We can then construct the true up vector by making the up vector orthogonal to the camera normal, u' = u - (u·n)n, and then normalizing: u = u'/|u'|.
- We can then get the horizontal vector by crossing the camera normal with the up vector: h = n × u.
- Then any given spatial point x can be mapped to the image plane by projecting the vector x - c onto the h and u vectors.
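The basis construction above can be sketched in a few lines. The following Python snippet is illustrative only (AstroBEAR itself is Fortran, and the helper names here are not part of the code); it builds the camera's orthonormal basis and projects a point onto the image plane:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def camera_basis(pos, focus, up):
    """Build the camera's (normal, up, horizontal) orthonormal basis."""
    n = normalize(sub(focus, pos))  # camera normal: from camera toward focus
    # True up: remove the component of the up vector along n, then normalize
    u = normalize([x - dot(up, n) * y for x, y in zip(up, n)])
    h = cross(n, u)                 # horizontal vector: n x u
    return n, u, h

def project(point, pos, n, u, h):
    """Image-plane coordinates of a point: project (point - pos) onto h and u."""
    d = sub(point, pos)
    return dot(d, h), dot(d, u)
```

For example, a camera at (0,-10,0) looking at the origin with up vector (0,0,1) yields n = (0,1,0), u = (0,0,1), h = (1,0,0), so the point (2,0,3) lands at image coordinates (2,3).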
- The volume of a given cell may lie entirely within one pixel, in which case the contribution to that pixel is just the amount of material in the cell. But when the cell gets large enough that its volume maps to more than one pixel, things get complicated. At that point it becomes better to sample which pixels pass through the cell instead, and calculate the optical depth through the cell. A given cell of size Δx at distance D from the camera will have a projected angular size of ~ Δx/D. Each pixel will subtend an angle of approximately FOV/res. So when Δx/D > FOV/res, the method should switch. If we sub-sample coarse grids to the same resolution as the finest level, and we have a minimum camera distance d', then we can choose a resolution for the projection of res ~ FOV·d'/Δx_min. Alternatively we can choose the sub-sampling rate so that each sub-cell subtends no more than one pixel, i.e. sample_res ≥ (Δx/D)/(FOV/res).
- To avoid information loss, the resolution of the image should satisfy res ≥ FOV·d'/Δx_min, so that a pixel is no larger than the projection of the smallest cell.
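As a concrete check of these criteria, here is a short numeric sketch with made-up numbers; the degree-to-radian handling and the value of res are assumptions, since the page does not show the units used internally:

```python
import math

fov_deg = 30.0   # Camera%FOV(1), assumed to be in degrees
res = 512        # hypothetical image resolution in pixels
pixel_angle = math.radians(fov_deg) / res   # angle subtended by one pixel

dx = 0.1         # cell size
d = 5.0          # distance from camera to cell
cell_angle = dx / d                         # small-angle projected size of the cell

# The cell-based method should switch to ray sampling when the cell
# covers more than one pixel:
covers_pixels = cell_angle / pixel_angle    # ~19.6 pixels across this cell
switch_method = covers_pixels > 1.0

# Minimum image resolution that resolves the finest cells at the
# minimum camera distance d':
dx_min, d_min = 0.05, 2.0
res_needed = math.ceil(math.radians(fov_deg) * d_min / dx_min)
```

With these numbers the cell spans roughly 20 pixels, so the ray-based method applies, and resolving 0.05-sized cells at distance 2 needs only a 21-pixel image across the 30° field of view.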
Image 1. A rotating "hockey-puck" region of the CollidingFlows problem with cameras.
Clearly there are still some aliasing effects. The routine that bins a cell into an image pixel is given below:
SUBROUTINE BinCell(Camera, pos, dx, data, rho)
   TYPE(CameraDef), POINTER :: Camera
   REAL(KIND=qPREC) :: pos(3), xpos(3)
   REAL(KIND=qPREC) :: dx, ddx, xlower(3)
   REAL(KIND=qPREC), DIMENSION(:,:) :: data
   REAL(KIND=qPREC) :: rho
   INTEGER :: ipos(2), sample_res, i, j, k
   xlower = pos - half*dx
   ! Sub-sample finely enough that each sub-cell spans at most ~1 pixel:
   ! samples per dimension ~ (cell angular size)/(pixel angular size)
   sample_res = 2*ceiling((dx/sqrt(sum((pos - Camera%pos)**2)))/(Camera%FOV(1)/Camera%res))
   ddx = dx/REAL(sample_res, KIND=qPREC)
   xlower = xlower - half*ddx
   DO i = 1, sample_res
      xpos(1) = xlower(1) + ddx*i
      DO j = 1, sample_res
         xpos(2) = xlower(2) + ddx*j
         DO k = 1, sample_res
            xpos(3) = xlower(3) + ddx*k
            ipos(1:2) = nint(GetPos(Camera, xpos)*shape(data))
            IF (ALL(ipos(1:2) >= 1) .AND. ALL(ipos(1:2) <= shape(data))) THEN
               data(ipos(1),ipos(2)) = data(ipos(1),ipos(2)) + rho*ddx**3
            END IF
         END DO
      END DO
   END DO
END SUBROUTINE BinCell
We could in principle continue to raise sample_res, but it gets computationally expensive. A better approach would be to sample along each ray at some interval; then the sampling is pixel based instead of volume based, which should reduce the aliasing. Or we could calculate the optical depth across each cell for each ray that intersects the cell. Given that our mesh is Cartesian, this is not as hard as it might sound:
- Find which rays could intersect the cell.
- For each ray, find which two faces the ray intersects and the corresponding points of entry and exit.
- The contribution to the pixel will be the value of the cell times the distance between those points.
SUBROUTINE BinCell(Camera, pos, dx, data, rho)
   TYPE(CameraDef), POINTER :: Camera
   REAL(KIND=qPREC) :: pos(3), xpos(3)
   REAL(KIND=qPREC) :: dx, xbounds(3,2), my_pixel(2), pixel(2)
   REAL(KIND=qPREC), DIMENSION(:,:) :: data
   REAL(KIND=qPREC) :: rho, a, intersection(6,3), ray(3), max_distance
   INTEGER :: i, j, k, npoints, min_pixels(2), max_pixels(2), dim, odim(2), edge
   xbounds(:,1) = pos - half*dx
   xbounds(:,2) = pos + half*dx
   ! Project the 8 cell corners to find the bounding box of pixels that could see the cell
   min_pixels = huge(min_pixels(1))
   max_pixels = 0
   DO i = 1, 2
      DO j = 1, 2
         DO k = 1, 2
            my_pixel = GetPixel(Camera, (/xbounds(1,i), xbounds(2,j), xbounds(3,k)/))*shape(data) + half
            min_pixels = max(1, min(min_pixels, floor(my_pixel)))
            max_pixels = min(shape(data), max(max_pixels, ceiling(my_pixel)))
         END DO
      END DO
   END DO
   DO i = min_pixels(1), max_pixels(1)
      DO j = min_pixels(2), max_pixels(2)
         pixel = REAL((/i,j/), KIND=qPREC) - half   ! pixel center
         ray = GetRay(Camera, pixel/REAL(shape(data), KIND=qPREC))
         ! Intersect the ray with each of the 6 cell face planes
         npoints = 0
         DO dim = 1, 3
            DO edge = 1, 2
               ! Solve Camera%pos(dim) + a*ray(dim) = xbounds(dim,edge) for a
               a = (xbounds(dim,edge) - Camera%pos(dim))/ray(dim)
               xpos = Camera%pos + a*ray
               odim = modulo((/dim, dim+1/), 3) + 1
               IF (ALL(xpos(odim) >= xbounds(odim,1) .AND. xpos(odim) <= xbounds(odim,2))) THEN
                  npoints = npoints + 1
                  intersection(npoints,:) = xpos
               END IF
            END DO
         END DO
         IF (npoints == 0) CYCLE
         ! Path length through the cell = distance between entry and exit points
         max_distance = 0d0
         DO k = 1, npoints
            max_distance = max(max_distance, sqrt(sum((intersection(k,:) - intersection(1,:))**2)))
         END DO
         data(i,j) = data(i,j) + rho*max_distance
      END DO
   END DO
END SUBROUTINE BinCell
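The face-intersection step of this routine can be illustrated on its own. The following Python sketch (not AstroBEAR code; names are for illustration) finds where a ray from the camera enters and exits an axis-aligned cell and returns the path length that weights the pixel:

```python
def ray_box_pathlength(origin, ray, lo, hi, eps=1e-12):
    """Length of the segment of origin + a*ray that lies inside the box [lo, hi].

    Mirrors the BinCell logic: intersect the ray with each of the six face
    planes, keep hits whose other two coordinates lie within the face, and
    return the distance between the entry and exit points.
    """
    hits = []
    for dim in range(3):
        if abs(ray[dim]) < eps:
            continue  # ray parallel to this pair of faces: no intersection
        for bound in (lo[dim], hi[dim]):
            a = (bound - origin[dim]) / ray[dim]
            p = [origin[k] + a * ray[k] for k in range(3)]
            other = [(dim + 1) % 3, (dim + 2) % 3]
            if all(lo[k] - eps <= p[k] <= hi[k] + eps for k in other):
                hits.append(p)
    if len(hits) < 2:
        return 0.0
    # Maximum pairwise distance between hit points = entry-to-exit path length
    return max(
        sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
        for p in hits for q in hits
    )
```

The pixel contribution is then rho times this path length, exactly as in the last line of the loop above.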
An Example: A Clump in an Ambient Medium
In this section we will introduce an example "fly-through", i.e. how to make a visualization using Camera Objects. All associated .data files, along with the problem.f90 for this problem, are attached below. This example involves a clump in an ambient medium, as you can see in the movie below (click the movie link):
Steps
Making the code
- ssh or log in to one of the local machines. Download the code and check out the development branch in the directory where you would like to carry out this problem. Do this with the following commands:
git clone ssh://orda@botwin.pas.rochester.edu/astrobear
You should now have a copy of astrobear, cd into the directory and execute:
git checkout development
and now your code is up to date. Make sure you do this with the newest version of the code. From there, cd into modules and create a new directory for your problem, and scp the attached .data files and problem.f90 into it. Then symlink your problem directory to "Problem", and symlink the Makefile.inc for the machine you are going to build your executable (i.e. the astrobear code) on. Finally, in your astrobear directory, make your code. It should create an executable without any errors.
- Now move your executable into your run directory, along with your .data files. Make an out directory and run your code on the machine you've made your executable on with
mpiexec -np <number_of_processors> astrobear > astrobear.log &
One can follow the astrobear.log file as the code writes output with
tail -f astrobear.log
For more explanation on how to get the astrobear code, make it and run a simulation, please visit the AstroBEAR UserGuide.
Visualizing output
- Now when you cd into your out directory you should see two types of output: chombo*.hdf and Mass_along_3_*.bov files. One can visualize the chombo files by slicing the 3D box at 50% along any axis to verify that you have made a clump and an ambient medium. However, to observe the camera effects we will be visualizing the .bov files.
- Import the bov files into VisIt, and plot the pseudocolor → mass.
- In order to properly see the ambient medium (which fills the rest of the simulation box), one might want to add a darker color, say black, like we have in Images 2 and 3. One can do this by double clicking the color table, editing the "hot" color table, and adding a 6th color. Ensure that you fix the color bar for your simulation as well.
- Make your movie and presto!
Making it your own
- Say you don't like the position and types of cameras you are using and want a different view of this clump. You can change this by editing the global.data and problem.data files. Note Image 4.
- There are two sections to your problem.data file: ProblemData and CameraData (you'll have one CameraData section for each camera, i.e. as many as you set ncameras equal to).
- The ProblemData section defines the parameters of the clump and ambient medium, the number of cameras (and hence CameraData sections), as well as the level of AMR used for the simulation (see Image 3), in this case 3.
- The CameraData sections are where one defines the parameters for each individual camera, as described above (camera position, focus, up vector, and the time at which the simulation reaches that camera).
- Ensure that your last camera has the same time as the final time of your simulation (see your global.data) so that you do not miss any cameras.
- You may want to evenly split up the camera times. You could also write a script that spits out the CameraData attributes for you (see above).
- Add as many cameras and CameraData sections as you like; however, make sure the number of CameraData sections equals the number you have set ncameras equal to.
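A minimal version of the camera-generating script mentioned above might look like the following sketch. The field names emitted here are assumptions based on this page, so check them against a working problem.data before use:

```python
# Generate CameraData blocks for problem.data: cameras on a circle around
# the focus, with camera times evenly spaced up to the simulation final time.
# NOTE: the namelist field names below are assumed, not verified.
import math

def camera_blocks(ncameras, final_time, focus=(4.0, 4.0, 4.0), radius=10.0):
    lines = []
    for i in range(ncameras):
        theta = 2.0 * math.pi * i / ncameras
        pos = (focus[0] + radius * math.cos(theta),
               focus[1] + radius * math.sin(theta),
               focus[2])
        time = final_time * (i + 1) / ncameras  # last camera lands on final_time
        lines += ["&CameraData",
                  "  pos      = %g, %g, %g" % pos,
                  "  Focus    = %g, %g, %g" % focus,
                  "  UpVector = 0, 0, 1",
                  "  time     = %g" % time,
                  "/"]
    return "\n".join(lines)

print(camera_blocks(ncameras=4, final_time=20.0))
```

Setting final_time to match the final time in global.data ensures the last camera coincides with the end of the simulation, as recommended above.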
Problem for the reader
Attachments (11)
- rotating.gif (4.1 MB) - added 13 years ago.
- CubeRotate.gif (12.9 MB) - added 13 years ago.
- problem.f90 (3.5 KB) - added 10 years ago. problem.f90 for clump example.
- physics.data (9.6 KB) - added 10 years ago. physics.data for clump example.
- global.data (5.2 KB) - added 10 years ago. global.data for clump example.
- problem.data (324 bytes) - added 10 years ago. problem.data for clump example.
- solver.data (1.4 KB) - added 10 years ago. solver.data for clump example.
- mesh0000.png (51.6 KB) - added 10 years ago. Mesh illustration for clump example.
- nowind_justclump1_0000.png (42.0 KB) - added 10 years ago. PNG for clump example.
- nowind_justclump1.gif (5.8 MB) - added 10 years ago. Movie of clump example.
- problem.data.png (47.5 KB) - added 10 years ago. PNG of problem.data file.