Completed mass query for theta = 0 case
Figure: Mass versus time for the four sinks that form in the CollidingFlows run, along with M_Simulation (the total mass in the simulation box) and M_Collision (the total mass in the collision region within the box).
- The time interval spans ~13 Myr.
- The mass axis is logarithmic to better illustrate the trend; on a linear axis the sink masses look close to zero compared to the mass in the box and in the cylindrical collision region.
I followed a procedure similar to the one used for the E vs. t line plots, but queried rho instead (a rough sketch of the query loop is included at the end of this entry, after the notes below).
I ran the query using both VNC and the interactive queue, on both Erica's account and my own on BH2. For the first 200 frames of the M_Simulation query, Clover collected most of the data from the chombos stored locally; surprisingly, Clover was faster than both VNC and the interactive queue. Ultimately, both of these remote-visualization setups are quite unreliable and require a lot of babysitting. It took me about three days to collect all of the data for the box and the cylindrical collision region in VisIt. Here are some issues and interesting things I encountered:
- Want to use a GPU? You need VNC, which means a GUI, and the GUIs seem flaky and prone to time-out errors. Here is the command to run VisIt with a GPU:
module load virtualgl visit/2.7.3 && vglrun visit
- Attached is the script I used for VNC. The CIRC website also has some material on remote visualization (using VNC). It might be faster and nicer to use on data that isn't too demanding on memory.
- I wanted to query from an interactive job on the standard partition (-p standard) with, say, -t 300. Whenever I tried this, VisIt would time out after collecting data for a few frames; it seems there were memory issues. So I stuck to an hour in the debug queue and monitored the memory/CPU percentage on the node I was on (here is a website explaining how to do that). This implies I can only query approximately 10 frames per interactive job.
- Apparently, querying data that has the cylindrical clip operator applied requires more memory than simply querying the total mass in the box; VisIt is presumably doing extra work. Just an FYI.
Jonathan suggested adding a post-processing element to AstroBEAR that would simply write all of this data to a curve file during a batch job. If we want these plots for the three other runs, I think I will just do that…
Moral of the story: using Query in VisIt on large data sets is finicky, so be careful!
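For concreteness, here is a rough sketch of the kind of frame-by-frame query loop described above, written for VisIt's Python CLI. The database path, output file name, and the use of the Weighted Variable Sum query on rho are my reconstruction of the procedure, not a verbatim copy of what I actually ran:

# Sketch of a total-mass query loop for VisIt's Python CLI (started with "visit -cli").
# The path and output file are placeholders for our CollidingFlows chombo data.
import csv

OpenDatabase("localhost:/path/to/chombos/data.visit")   # hypothetical database path
AddPlot("Pseudocolor", "rho")                            # mass density
# For the collision-region numbers I also applied the cylindrical clip at this point,
# which is the step that seemed to drive the extra memory use noted above.
DrawPlots()

with open("mass_vs_time.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "time", "mass"])
    for state in range(TimeSliderGetNStates()):
        SetTimeSliderState(state)
        Query("Time")
        t = GetQueryOutputValue()
        Query("Weighted Variable Sum")    # integrates rho over the volume -> total mass
        m = GetQueryOutputValue()
        writer.writerow([state, t, m])

Running a loop like this would at least remove the GUI babysitting, though the AstroBEAR post-processing route mentioned above avoids VisIt queries altogether.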
VNC Script
Notes:
- Sign in on your local machine.
emacs vnc_connect_linux.sh -nw
- Paste the script in.
- Make sure it is executable and run it.
- It should prompt you to sign in to BH2, and ask how long you want your session to be, the size of the GUI, etc.
- Hit Enter when it says a password is found. (FYI: you'll also have to set an extra password for your account when it establishes the tunnel.)
#!/bin/bash -i
via=bluehive2.circ.rochester.edu
#TurboVNCDir="/opt/TurboVNC/"
#vncviewer=$TurboVNCDir/bin/vncviewer
vncviewer=vncviewer
read -p "Please enter your netid: " user
read -p "Do you need to start a VNC server? [y]:" vnc_start
read -p "Set a timeout for your VNC server [60]:" vnc_timeout
read -p "Choose a resolution [1280x1024]:" vnc_resolution
if [[ -z "$vnc_timeout" ]]; then
    vnc_timeout=60
fi
if [[ -z "$vnc_resolution" ]]; then
    vnc_resolution="1280x1024"
fi
if [[ $vnc_start =~ ^[Yy]$ ]] || [[ -z "$vnc_start" ]]; then
    echo
    echo "Now connecting to bluehive and starting the "
    echo "VNC server."
    ssh $user@$via "vnc_start -t $vnc_timeout -g $vnc_resolution" # | grep "vncserver running on " | cut -d " " -f 4
fi
read -p "Please enter server (ie bhx0101:1) " server
host=`echo $server | awk -F ':' '{print $1}'`
display=`echo $server | awk -F ':' '{print $2}'`
port=$(expr 5900 + $display)
echo "Establishing ssh tunnel"
TMPSOCK=`mktemp -u XXXXXX.control`
ssh -fN -o ExitOnForwardFailure=yes -M -S /tmp/$TMPSOCK -L $port:$host:$port $user@$via
echo "Launching VNC viewer"
$vncviewer localhost:$port
echo "Disconnecting ssh tunnel"
ssh -S /tmp/$TMPSOCK -O exit $user@$via
Core mass script documentation
Toward the end of last week I worked on writing a script that opens, reads, and writes the mass data in the sinks_*.okc files to a new .csv file (in particular for the CollidingFlows problem; see our data here: CollidingFlowsFigures). The purpose was to gather all of the sink data over time so that we can visualize it, as you can see in the results posted below. These charts will let us see how the sinks accumulate mass over the course of the simulation. Here I document the development of my code and discuss what else I would like to do with it.
Objectives
Editing code:
- Write information of the density/mass of the cores to a .csv file
- Convert the information in the .csv to solar masses and Myr.
- Take the converted .csv file and make graphs using matplotlib or Excel. Excel is quick, but a code that does all of this and generates the plots in one swoop would be far more efficient for later use.
- Document and post concise and general form under the VisIt page as a script tab.
So far I have completed the first bullet~ (02-09-2015)
Editing charts:
- Convert mass to solar masses.
- Convert time to Myr (see the conversion/plotting sketch after this list).
- Crop the x-axis to start when the first sink forms; get rid of the leading zero values.
- Fix the x- and y-axes to be the same for all of the runs.
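Here is a minimal sketch of the conversion and plotting step I have in mind, assuming the .csv written by okc_to_csv.py holds a frame column followed by one mass column per sink; the two scale factors are placeholders that still need to be replaced with our actual computational-to-physical unit conversions.

# Sketch: convert the okc_to_csv.py output to solar masses / Myr and plot it.
# The column layout and both scale factors are assumptions, not final values.
import csv
import matplotlib.pyplot as plt

MYR_PER_FRAME = 0.1          # placeholder: frame spacing in Myr
MSUN_PER_CODE_MASS = 1.0     # placeholder: computational mass unit -> solar masses

times = []
masses = []                  # one list of sink masses per frame
with open("sink_masses.csv") as f:        # hypothetical output of okc_to_csv.py
    reader = csv.reader(f)
    header = next(reader)                 # e.g. ["frame", "sink1", "sink2", ...]
    for row in reader:
        values = [float(x) for x in row]
        times.append(values[0] * MYR_PER_FRAME)
        masses.append([m * MSUN_PER_CODE_MASS for m in values[1:]])

for i, label in enumerate(header[1:]):
    plt.plot(times, [frame_masses[i] for frame_masses in masses], label=label)
plt.yscale("log")                         # log mass axis, as in the figure above
plt.xlabel("Time (Myr)")
plt.ylabel("Mass (solar masses)")
plt.legend()
plt.savefig("sink_mass_vs_time.png")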
okc_to_csv.py v. 1
Screen capture of the code | Screen capture of an example .okc file
The code reads the first line of the .okc file and splits the numbers into a list of ints. It then uses those numbers to access the data below, which starts at the 34th line. The only hard-coded parts are the headers for the columns of the .csv file (L17) and the number of lines of data it has to read into the .csv (L27); essentially, you change the latter to match the number of sinks that have formed by the end of the simulation. You can check this by counting the number of lines of data written out in the final frame of the simulation.
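For reference, here is a rough sketch of what okc_to_csv.py v. 1 does, reconstructed from the description above. The real script is only shown in the screen capture, so the file naming, the headers, and the mass column index here are assumptions.

# Rough sketch of okc_to_csv.py v.1, reconstructed from the description above.
# The headers, the mass column index, and the file naming are assumptions.
import csv
import glob

N_SINKS = 4                                              # hard-coded, like L27 of the screen capture
HEADERS = ["frame", "sink1", "sink2", "sink3", "sink4"]  # hard-coded, like L17

with open("sink_masses.csv", "w") as out:
    writer = csv.writer(out)
    writer.writerow(HEADERS)
    for frame, path in enumerate(sorted(glob.glob("sinks_*.okc"))):
        with open(path) as f:
            lines = f.readlines()
        layout = [int(n) for n in lines[0].split()]      # first line: ints the real script uses to locate the data
        data = lines[33:33 + N_SINKS]                    # sink data starts at the 34th line
        sink_masses = [float(line.split()[3]) for line in data]  # assume mass is the 4th column
        sink_masses += [0.0] * (N_SINKS - len(sink_masses))      # pad frames with fewer sinks (the zero values)
        writer.writerow([frame] + sink_masses)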
Results 02-09-2015
Mass vs. time charts for the Shear 0, Shear 15, Shear 30, and Shear 60 runs (images).
Solution to streamlines issues.
It turns out we were calling the wrong position in the array. The expressions should be (a sketch for defining them from the VisIt CLI is included at the end of this post):
By_downx = array_decompose(projections,0)
Bz_downx = array_decompose(projections,1)
Bz_downy = array_decompose(projections,0)
Bx_downy = array_decompose(projections,1)
Byz_downx = {<By_downx>, <Bz_downx>}
Bzx_downy = {<Bz_downy>, <Bx_downy>}
This yields the following:
Which makes much more sense.
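Since these expressions have to be recreated by hand each session, here is a short sketch of defining the corrected versions from VisIt's Python CLI instead. It assumes the projected field components are exposed to VisIt as the array variable projections, exactly as in the expressions above, with the down-x and down-y definitions applied to their respective projection outputs.

# Define the corrected streamline expressions from VisIt's Python CLI.
# Assumes the projected B-field components live in the array variable "projections".
DefineScalarExpression("By_downx", "array_decompose(projections, 0)")
DefineScalarExpression("Bz_downx", "array_decompose(projections, 1)")
DefineScalarExpression("Bz_downy", "array_decompose(projections, 0)")
DefineScalarExpression("Bx_downy", "array_decompose(projections, 1)")
# Vector expressions used for the streamline plots on the two projections.
DefineVectorExpression("Byz_downx", "{<By_downx>, <Bz_downx>}")
DefineVectorExpression("Bzx_downy", "{<Bz_downy>, <Bx_downy>}")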
Potential problems with projected streamlines?
So I am attempting to plot the streamlines for our colliding flows problem. Here is an example of the shear-0 case at 10.1 Myr (frame 101); I did these under Erica's account on BH2, hence the username, haha.
The first image looks down the barrel of the two flows (i.e., the mass projected down the x-axis), so the vertical axis is z and the horizontal is y.
The second image is a projection down the y-axis. Thus the vertical axis is x and the horizontal is z. This makes sense given that we've defined GxBounds = 0d0,0d0,0d0,62.5d0,75d0,75d0. The two flows are colliding along x, so in the second image, they are coming in from top and bottom.
In both images I've plotted the column density maps with min = 60 and max = 1000, and I set similar min/max values for the streamlines, which are plotted on top of the column density maps. I also checked that they are scaled by magnitude. After talking with Erica, we are not sure whether these streamlines make physical sense given that we have defined a magnetic field along the flow axis (i.e., x). Ignore the VisIt axis labels; they are generic and don't reflect the dimensions of our problem.
In our problem.f90 we have defined the projections for streamlines like so:
!For 'projected' streamlines plot of the data down x:
CALL CreateProjection(projection)
Projection%dim=1
Projection%Field(1)%iD=By_field
Projection%Field(1)%component=BOTHCOMP
Projection%field(1)%name='By'
Projection%Field(2)%iD=Bz_field
Projection%Field(2)%component=BOTHCOMP
Projection%field(2)%name='Bz'

!For 'projected' streamlines plot of the data down y:
CALL CreateProjection(projection)
Projection%dim=2
Projection%Field(1)%iD=Bz_field
Projection%Field(1)%component=BOTHCOMP
Projection%field(1)%name='Bz'
Projection%Field(2)%iD=Bx_field
Projection%Field(2)%component=BOTHCOMP
Projection%field(2)%name='Bx'
So in VisIt I defined a few expressions to be able to plot the streamlines. For down the x-axis (which corresponds to the mass1 CDMs):
By_downx = array_decompose(projections, 1)
Bz_downx = array_decompose(projections, 2)
from which you can create the vector expression Byz_downx = {<By_downx>, <Bz_downx>} to plot the streamlines like I have above. The first component should correspond to the correct axis if the horizontal component is truly y, and thus the second component will correspond to z if the vertical is truly z. So I think I have these lined up correctly? For projections down the y-axis (corresponding to the mass2 CDMs):
Bz_downy = array_decompose(projections, 1)
Bx_downy = array_decompose(projections, 2)
you can create Bzx_downy = {<Bz_downy>, <Bx_downy>}. Clearly, from the size of the box we know the horizontal component is z, so the first parameter in our vector should be Bz, and similarly the second should be x. However, the streamlines don't seem right? Not sure what is going on.