Completed mass query for theta = 0 case
Figure. A mass versus time plot for the four sinks that form in the CollidingFlows run, along with M_Simulation (the total mass in the simulation box) and M_Collision (the total mass in the collision region within the box).
- The time interval spans ~13 Myr.
- The mass axis is logarithmic in order to better illustrate the trend; on a linear axis the sink masses look close to zero compared to the mass in the box and the cylindrical collision region.
I followed a procedure similar to the one we used for the E vs. t line plots, but queried rho instead.
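For reference, here is a minimal sketch of the kind of query loop this involves in VisIt's Python CLI. The database path and output file name are placeholders, and the "Weighted Variable Sum" query over rho is one way to get a total mass; the actual sessions may have differed:

# Run inside VisIt's Python CLI (e.g. visit -cli -nowin); the database
# path below is a placeholder for wherever the chombo files live.
OpenDatabase("localhost:/path/to/chombos/out*.hdf database")
AddPlot("Pseudocolor", "rho")
DrawPlots()

out = open("M_simulation.curve", "w")
out.write("# M_Simulation\n")
for state in range(TimeSliderGetNStates()):
    TimeSliderSetState(state)
    Query("Weighted Variable Sum")   # integrates rho over volume -> total mass
    out.write("%d %e\n" % (state, GetQueryOutputValue()))
out.close()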
I ran the query using both VNC and the interactive queue, on both Erica's account and my own on BH2. For the first 200 frames of the M_Simulation query, Clover collected most of the data from the chombos stored locally. Surprisingly, Clover was faster than both VNC and the interactive queue. Ultimately, both of these remote visualization setups are unreliable and require a lot of babysitting; it took me about 3 days to collect all the data for the box and the cylindrical collision region in VisIt. Here are some issues and interesting things I encountered:
- Want to use a GPU? You need VNC, which has a GUI, and GUIs seem flaky and prone to timeout errors. Here is the command to run VisIt with a GPU:
module load virtualgl visit/2.7.3 && vglrun visit
- Attached below is the script I used for VNC. The CIRC website also has some material on remote visualization using VNC. It might be faster and nicer to use on data that isn't too demanding on memory.
- I wanted to query using an interactive job on the standard partition (interactive -p standard) for, say, -t 300. Whenever I tried this, VisIt would time out after collecting data for a few frames; it seems there were memory issues. So I stuck to an hour in the debug queue and monitored the memory/CPU percentage on the node I was on (there is a website explaining how to do that). This means I can only query approximately 10 frames per interactive job.
- Apparently, running Query on data that uses the cylindrical clip operator requires more memory than querying the total mass in the box; VisIt is probably doing extra work. Just an FYI.
Jonathan suggested adding a post-processing element to AstroBEAR that would simply write all of this data to a curve file during a batch job. If we want these plots for the three other runs, I think I will just do that; see the sketch of the curve format below.
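For context, the .curve format VisIt reads is just named blocks of x y pairs. A minimal Python sketch for illustration (the file name and values are made up; in practice AstroBEAR itself would write this during the batch job):

# Writes a file in the simple format VisIt's Curve2D reader accepts:
# a '# name' line starts a new curve, followed by whitespace-separated
# x y pairs. The sink series here are placeholder values.
with open("sink_masses.curve", "w") as f:
    for name, series in [("sink0", [(0.0, 0.0), (1.0, 2.5e-3)]),
                         ("sink1", [(0.0, 0.0), (1.0, 1.1e-3)])]:
        f.write("# %s\n" % name)
        for t, m in series:
            f.write("%e %e\n" % (t, m))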
Moral of the story: using Query in VisIt on large data sets is finicky, so be careful!
VNC Script
Notes:
- Sign into your local machine.
emacs vnc_connect_linux.sh -nw
- Paste the script in.
- Make sure it is executable (chmod +x vnc_connect_linux.sh) and run it.
- It should prompt you to sign into BH2, ask how long you want your session to be, the size of the GUI, etc.
- Hit enter when it says a password is found. (FYI: you'll also have to create an extra password for your account when it establishes the tunnel.)
#!/bin/bash -i
via=bluehive2.circ.rochester.edu
#TurboVNCDir="/opt/TurboVNC/"
#vncviewer=$TurboVNCDir/bin/vncviewer
vncviewer=vncviewer

read -p "Please enter your netid: " user
read -p "Do you need to start a VNC server? [y]:" vnc_start
read -p "Set a timeout for your VNC server [60]:" vnc_timeout
read -p "Choose a resolution [1280x1024]:" vnc_resolution

if [[ -z "$vnc_timeout" ]]; then
    vnc_timeout=60
fi

if [[ -z "$vnc_resolution" ]]; then
    vnc_resolution="1280x1024"
fi

if [[ $vnc_start =~ ^[Yy]$ ]] || [[ -z "$vnc_start" ]]; then
    echo
    echo "Now connecting to bluehive and starting the "
    echo "VNC server."
    ssh $user@$via "vnc_start -t $vnc_timeout -g $vnc_resolution" # | grep "vncserver running on " | cut -d " " -f 4
fi

read -p "Please enter server (ie bhx0101:1) " server
host=`echo $server | awk -F ':' '{print $1}'`
display=`echo $server | awk -F ':' '{print $2}'`
port=$(expr 5900 + $display)

echo "Establishing ssh tunnel"
TMPSOCK=`mktemp -u XXXXXX.control`
ssh -fN -o ExitOnForwardFailure=yes -M -S /tmp/$TMPSOCK -L $port:$host:$port $user@$via

echo "Launching VNC viewer"
$vncviewer localhost:$port

echo "Disconnecting ssh tunnel"
ssh -S /tmp/$TMPSOCK -O exit $user@$via
Core mass script documentation
Toward the end of last week I worked on writing a script that opens, reads, and writes the mass data in the sinks_*.okc files to a new .csv file (in particular for the CollidingFlows problem; see our data here: CollidingFlowsFigures). The purpose is to gather all of the sink data over time so that we can visualize it, as in the results posted below. These charts will let us observe how the sinks accumulate mass over the course of the simulation. Here I document the development of the code and discuss what else I would like to do with it.
Objectives
Editing code:
- Write the density/mass information for the cores to a .csv file.
- Convert the information in the .csv to solar masses and Myr.
- Take the converted .csv file and make graphs using matplotlib or Excel. Excel is quick, but code that does all of this and generates the plots in one swoop would be much more efficient for later use (see the sketch after the Editing charts list below).
- Document and post a concise, general form under the VisIt page as a script tab.
So far I have completed the first bullet. (02-09-2015)
Editing charts:
- Convert mass to solar masses.
- Convert time to Myr.
- Crop the x-axis to start when the first sink forms; get rid of the zero values.
- Fix the x- and y-axis ranges to be the same across all of the runs.
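Here is a minimal sketch of the conversion and plotting steps, assuming the frame/sink/mass .csv layout produced by the script documented below; the unit scales and axis limits are placeholders that would come from the simulation's scaling parameters:

import csv
from collections import defaultdict
import matplotlib.pyplot as plt

M_SCALE = 1.0   # placeholder: computational mass units -> solar masses
T_SCALE = 1.0   # placeholder: frame number -> Myr

# Collect a (time, mass) series per sink from the .csv, whose rows are
# assumed to be: frame, sink, ..., mass (mass in the last column).
series = defaultdict(lambda: ([], []))
with open("sinks_masses.csv") as f:
    reader = csv.reader(f)
    next(reader)   # skip the header row
    for row in reader:
        t, m = series[row[1]]
        t.append(int(row[0]) * T_SCALE)
        m.append(float(row[-1]) * M_SCALE)

for sink in sorted(series):
    t, m = series[sink]
    plt.semilogy(t, m, label="sink %s" % sink)   # logarithmic mass axis
plt.xlim(1.0, 13.0)   # placeholder crop: first sink formation to ~13 Myr
plt.xlabel("Time (Myr)")
plt.ylabel("Mass (solar masses)")
plt.legend()
plt.savefig("sink_mass_vs_time.png")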
okc_to_csv.py v. 1
Screen captures: the code | an example .okc file
The code reads the first line of the .okc file and splits the numbers into a list of ints. It then uses those numbers to access the data below, which starts at the 34th line. The only hard-coded parts are the headers for the columns of the .csv file (line 17 of the script) and the number of lines of data to read into the .csv (line 27). Essentially, you change the latter to match the number of sinks that have formed by the end of the simulation; you can check this by counting the number of lines of data spit out in the final frame of the simulation.
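For reference, here is a minimal sketch of what the script does; the column headers, sink count, and output file name are placeholders, and the actual v. 1 code is in the screen capture above:

import csv
import glob

N_SINKS = 4          # hard-coded: sinks formed by the end of the run
DATA_START = 33      # 0-indexed; the sink data starts on line 34
HEADERS = ["x", "y", "z", "mass"]   # placeholder column names

with open("sinks_masses.csv", "w") as out:
    writer = csv.writer(out)
    writer.writerow(["frame", "sink"] + HEADERS)
    for frame, path in enumerate(sorted(glob.glob("sinks_*.okc"))):
        lines = open(path).readlines()
        # Line 1 holds integer counts that locate the data block; the
        # actual script derives the offset from these.
        counts = [int(n) for n in lines[0].split()]
        for sink, row in enumerate(lines[DATA_START:DATA_START + N_SINKS]):
            writer.writerow([frame, sink] + row.split())

Run it from the directory containing the sinks_*.okc files; it writes one row per sink per frame.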
Results 02-09-2015
Mass vs. time charts for each run: Shear 0 | Shear 15 | Shear 30 | Shear 60