Ablative RT with Riccardo's initial profile
- Riccardo's initial profile and BCs: Riccardo uses a zero heat flux boundary condition at the top. hypre chokes due to the rapidly increasing temperature at the top.
(figures: Rho, T, P, Vy)
- Riccardo's initial profile and non-zero heat flux at the top
The front holds stable until around 1 ns, then gets pushed up; it is pushed out around 2.8 ns:
(figures: Rho, T, Vy)
Testing Report 02/25/2014
Jonathan, Baowei, and I have been working on updating the test suite. Baowei has been focusing on the buildproblem script, repository, and wiki implementation while Jonathan and I have been working on creating new testing routines. The goal is to no longer use bear2fix and give astrobear its own testing capabilities. Here's how it will work:
- We will now have a test object that handles error calculations and image production. I have created a wiki page that details how to use this object: TestObjects
- Each problem module in the test suite needs to have a few things
- a file named 'layout0000.dat' which contains verified data that new data gets compared to (this takes the place of a reference chombo; a generic sketch of this kind of comparison appears after this list)
- a parameter 'ltest' in global.data (default is F) so that the buildproblem script can turn testing on/off
- in problem.f90, the test object needs to be created and certain parameters may need to be initialized.
- there should also be a reference image 'refimage.png' that we can use for visual verification
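For illustration only, here is a rough, generic sketch (in Python, not the Fortran test object itself) of the kind of reference-data comparison described above. The whitespace-separated numeric file format, the hypothetical 'layout_new.dat' name, and the tolerance are assumptions for this example, not the actual AstroBEAR implementation.

```python
# Illustrative sketch only -- not the AstroBEAR test object (that lives in
# problem.f90). It shows the general idea of comparing new output against a
# verified reference file such as 'layout0000.dat'. The whitespace-separated
# numeric format and the 1e-8 tolerance are assumptions for this example.
import numpy as np

def compare_to_reference(new_file, ref_file="layout0000.dat", tol=1e-8):
    new = np.loadtxt(new_file)
    ref = np.loadtxt(ref_file)
    if new.shape != ref.shape:
        return False, float("inf")
    # relative L2 error over the whole data set
    err = np.linalg.norm(new - ref) / max(np.linalg.norm(ref), 1e-300)
    return err <= tol, err

if __name__ == "__main__":
    passed, err = compare_to_reference("layout_new.dat")
    print("relative L2 error = %.3e -> %s" % (err, "PASS" if passed else "FAIL"))
```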
Meeting Update 02/24/2014 -- Baowei
- Tickets
- new: #347(CF memory errors on Bluestreak), #348(Asymmetric outflow), #349(Array Bounds mismatch in CalcSinkAcc), #350(Plotting artifacts due to visit / chombo files), #351(Pthreads = 2 breaks compiler on BS), #352(Better I/O for parallelization), #353(Use new table interface for all tables), #354(initializing fields to nan)
- closed: several old tickets (like #287, #289, #312, #313; details can be found at http://astrobear.pas.rochester.edu/trac/astrobear/blog/bliu02182014), #349, #352
- Science
- Ablative RT: #345. Still waiting for the time scale with the fixed gravity constant from Rui. Checked Betti's initial profile and BCs with the AstroBEAR code and found that the temperature at the top jumped to very high values, probably due to the piled-up heat flux, which chokes hypre (http://astrobear.pas.rochester.edu/trac/astrobear/ticket/345#comment:3). Jonathan suggested extending the domain in y with Betti's BCs. Working on that.
CND with outflow and magnetic field
This simulation includes the central outflow with the usual parameters, plus an initially vertical magnetic field with a field strength of 10 microgauss. The first animation shows the density of the central 4 pc; the interaction of the outflow with the inner rim of the accretion disk causes a clumpy structure, as it does in the simulation without a magnetic field that I showed last week. The second animation shows the face-on beta parameter of the disk's inner region. As expected with such a weak magnetic field, the beta parameter is very high, so the dynamics are completely dominated by the gas and not by the magnetic field. The third animation shows the edge-on beta parameter; the outflow shapes the magnetic field into a more or less radial configuration.
Animation of the density, face-on, inner 4 pc of the disk
Animation of the beta-parameter, edge-on, with streamlines of the magnetic field
Density, face-on, inner 4 pc of the disk:
Beta-parameter, face-on, inner 4 pc of the disk, with streamlines of the magnetic field:
Beta-parameter, edge-on, with streamlines of the magnetic field:
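For reference (nothing new to the simulation itself), the beta parameter quoted here is the usual ratio of thermal to magnetic pressure, in CGS:

$$\beta \equiv \frac{P_\mathrm{gas}}{P_\mathrm{mag}} = \frac{8\pi\,P_\mathrm{gas}}{B^{2}}$$

With $B \approx 10\,\mu$G, $P_\mathrm{mag} = B^2/8\pi \approx 4\times10^{-12}\ \mathrm{erg\,cm^{-3}}$, so any gas pressure well above that value gives $\beta \gg 1$ and gas-dominated dynamics, consistent with the description above.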
Science Meeting Update 02/24/2014 - Eddie
- working on polishing the 2.5D pulsed jets paper
- I was testing a 3D Mach stem setup and got it to run on bluestreak, but bluestreak is down for a couple of days for maintenance so I can't access the data yet.
- I want to start reading up on jet launching. Christian Fendt, who I met in Heidelberg, has done some work on this so I'm looking at some of his papers. Here's one that focuses on the effects of magnetic diffusivity: http://adsabs.harvard.edu/abs/2002A%26A...395.1045F
Meeting 2/24
Continuing to work through Toro, but I also found this article on using stochastic methods to arrive at Riemann invariants. It's particularly interesting to me because I'm also doing an independent study in stochastic calculus. Unfortunately it's still a bit over my head, but I would like to present on it in the future (once I'm a bit further along in both books). In the meantime, this is a cool paper on N-body simulations of planetesimals in a binary system.
Meeting update -- Erica
Colliding Flows:
The hypre library's no-global-partition option worked in fixing the memory leak issue.
Sims are progressing along nicely; see the page for an update. Standard out predicts another 8 days at the current frame rate.
BE-paper:
Referee report back. Good comments. Will be working through them and resubmitting this week.
Questions:
- Keto reference?
In-group Education:
I made a page on scaling for Bluestreak
Documentation:
Mini-group report
Outreach:
Horizon's program over the summer: teach underprivileged kids about computational astro. Possible to have them using astrobear?
Mini-group page
Testing Report 02/18/2014
- We only do weekly testing on bluehive for the official revision. Our latest official revision of AstroBEAR2.0 is 1245:6ea1a968486d, which was checked in on Aug 19th. So it's basically testing the old code over and over. We need to start testing our development version of AstroBEAR3.0.
- The current testing suite revision is 161 (for AstroBEAR2.0), which was checked in on July 23rd. This was copied to botwin when we moved the wiki from clover to botwin.
- The testing script buildproblem needs to be updated: a) the testing results are saved to clover, but b) we lost access to /cloverdata/ from other local machines, possibly due to the failure of one disk on clover.
- The personal current-tests pages, which let you check the testing results on different machines via [[CurrentTests(username_machine)]], need to be updated, since again the results are saved on clover and we have lost direct access to that machine.
- Baowei is working on updating the testing script and wiki page, as he needs to test Jonathan's fix to Marvin's ticket.
- Want to go over the modules that are currently tested, and make sure they are up to date. Also need to know which new modules we will have in 3.0, and which of those should be tested regularly.
Current Problem Modules Tested in 2.0
- BasicDisk
- BonnorEbertSphere
- Bondi
- BrioWuShockTube
- CorotatingBinary
- EinfeldtRarefaction
- FieldLoopAdvection
- FieldLoopRestart
- FillingFraction
- GravitationalCascade
- HydroWaves
- IsoHydroWaves
- IsoMHDWaves
- IsotropicTurbulence
- Marquee
- MHDWaves
- MolecularCloudFormation
- MomentumConservation
- MultiClumps
- OrbitingParticles
- RadiativeInstability0
- RadiativeInstability05
- RadiativeInstability1
- RadShock
- RTInstability
- SingleClump
- SlowMolecularCloudFormation
- SodShockTube
- ThermalInstability
- UniformCollapse
2.0 Problem Modules Not Tested
Module | Should it be tested? | Notes |
---|---|---|
1DWaves | | |
2DWaves | | |
Binary | | |
Christina_Original | | |
CurrentSheet | | |
GaussDiffusion | | |
GravoTurbulence | | |
HydroStaticStar | | |
jets | no | (this is basically Martin's jet module which was renamed CoolingJet in 3.0) |
MTI | | |
MultiJets | | |
StreamDisk | | |
TrueLoveProblems | | |
New 3.0 Problem Modules
Module | Should it be tested? |
---|---|
PulsedJet | |
MachStems | |
CoolingJet | |
EmissionMaps | no |
Debug meeting report -- 02/18/2014
- Marvin's MHD disk
- Eddie's 3D Pulsed Jets
- Erica's Colliding Flows
- Baowei's Ablative RT
- Shule's Federrath/Krumholz Accretion
Meeting Update 02/17/2014
- Laser Simulations
Simulation setup (2D):
With 2 mm ablator (contour shows the target material tracer):
With 10 um ablator initial setup:
At 100 ns:
With 10 um ablator, added 8 Tesla vertical magnetic field:
At 100 ns:
Without ablator, hydro, single shock with the same shock speed (115 um/ns) initial setup:
At 100 ns:
Without ablator, hydro, single shock with half the shock speed (57 um/ns):
At 200 ns (the time slider in the image should be doubled):
Without ablator, added 8 Tesla vertical field, single shock with the same shock speed (115 um/ns):
At 100 ns:
- TSF
Revising the final version of the paper. Should be done by the end of this week.
- Magnetized TSF
Since we've decided to tackle the magnetized TSF next, I have migrated the low res runs on Kraken to bluestreak. We should get the set of production runs we need after these runs are done.
Science Meeting Update 02/17/2014 - Eddie
Mach Stems
- all of the 2-D 80 cells/rclump runs completed on BlueStreak
- I did some estimates for the SUs required for 3-D runs: ehansen02122014
1-D Rad Shocks
- Anna made the comparison plot, and it showed that implementing the Raymond ionization table did not make much of a difference. There must be a different reason for why our models are different.
- I thought to check the time resolution, but when I looked at one of my models I found that the time step was less than a nanosecond. If I used the right units, I think typical ionization rates lead to a very rough estimate of about 10^23 ionization events per cell per time step near the shock front.
Pulsed Jets
- I couldn't get visit to open the chombos on kraken, but I did get it to open the .bov files. Below are the emission maps for the hydro case (left) and the beta = 1 case (right).
(emission map images: hydro on the left, beta = 1 on the right)
Other
- thought about memory and run time scaling: ehansen02132014
CND with outflow - Marvin
The following animations show simulations with the outflow object but without a magnetic field. The outflow has a radius of 0.8 pc, a velocity of 700 km/s, and a density of 100 cm^-3. I calculated the evolution for 5e4 years (my previous simulations were 1e6 years). As initial conditions I used data from one of my previous simulations, so we don't have to wait for the disk to relax from its initial conditions. I increased the resolution of the disk from level 2 to level 4 (now we have a resolution of 0.04 pc), so the disk still has to adjust to the new resolution. The first animation shows the edge-on view of the disk's density; I show only the inner 4 pc. Due to the increased resolution the disk becomes thinner; furthermore, we see a "bow shock" situated at the inner rim of the accretion disk. The second animation shows the corresponding face-on view. As in the 2D simulation that I showed last week, clumps are produced, but the density contrast is about a factor of 2 and thus much lower than in the 2D simulation (there the density contrast was 3 orders of magnitude).
Animation of the inner disk with central outflow - edge-on
Animation of the inner disk with central outflow - face-on
Disk with central outflow - edge-on:
Disk with central outflow - face-on:
Meeting update - Erica, 2/17/14
My main focus this week? Super-computing.
Page:
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/CFRunStats
Meeting Update 02/17/2014 -- Baowei
- Tickets
- Users:
- checked with Andy of Rice: AstroBEAR and Visit run well on Rice resources.
- set up Marvin on bluehive and bgq
- Erica's reservation on bgq
- Resources
- project & teragrid resources: ProjectRuns
- group reservation of half the bgq machine for weeks
- the link to /cloverdata/ from other local machines fails, possibly due to the failure of one disk on clover. The testing and backup scripts need to be updated accordingly.
- Science
- Ablative RT: aiming for a stable time of 3~4 nanoseconds according to the LLE people. Tried different top BCs; the hypre-choking problem is fixed (details at #345). Still need to make the front stay longer.
Memory and Run Time Scaling
It seems intuitive and obvious that the amount of memory used ($M$) should scale with the total number of cells $N$. For a 3-D problem,

$N = N_x N_y N_z \propto \left(\frac{1}{\Delta x}\right)^3$

So if the total memory $M$ is proportional to $N$, then it follows from the above equations that

$M \propto \Delta x^{-3}$

You might also think that the run time $T$ scales in the same way, but that is not quite correct. Memory usage is a function of how much information you have; run time is a function of how many calculations you have to do on that information. The single most important factor determining run time is how many time steps ($n_{steps}$) are required to update each cell. So it now makes more sense to write an expression like this:

$T \propto N \, n_{steps}$

The CFL condition is needed to calculate how the number of time steps scales:

$\Delta t = C \frac{\Delta x}{v_{max}}$

where $C$ is the CFL number and $v_{max}$ is the maximum wave speed. $C$ and $v_{max}$ can be ignored since they are constants, and then we can write

$\Delta t \propto \Delta x$

So if we have a run where we are trying to get to some final time $t_{final}$, we have to take a certain number of time steps:

$n_{steps} = \frac{t_{final}}{\Delta t} \propto \Delta x^{-1}$

Combining this equation with our previous equation for $T$, we get the final result:

$T \propto N \, \Delta x^{-1} \propto \Delta x^{-4}$
Below is a very simple example that illustrates how these equations work. For simplicity, the 1 x 1 x 1 simulation has all of its values explicitly set to 1.
Grid | N | dx | dt | t_final | n_steps | T
---|---|---|---|---|---|---
1 x 1 x 1 | 1 | 1 | 1 | 1 | 1 | 1
2 x 2 x 2 | 8 | ½ | ½ | 1 | 2 | 16
4 x 4 x 4 | 64 | ¼ | ¼ | 1 | 4 | 256
The scaling equations can also be generalized to problems of any dimension $n_{dim}$: $M \propto \Delta x^{-n_{dim}}$ and $T \propto \Delta x^{-(n_{dim}+1)}$.
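Here is a tiny standalone sketch (not AstroBEAR output) that reproduces the example table above under these scalings, with the CFL number, wave speed, per-cell cost, and final time all set to 1 as in the table:

```python
# Minimal sketch of the scaling example above: a uniform ndim-D grid with
# cells of size dx, run to a fixed final time with a CFL-limited time step.
# Constants (CFL number, wave speed, per-cell cost) are set to 1, as in the table.
def scaling(dx, ndim=3, t_final=1.0):
    n_cells = (1.0 / dx) ** ndim      # N  ~ dx^-ndim  (memory)
    dt = dx                           # CFL: dt ~ dx
    n_steps = t_final / dt            # number of time steps
    work = n_cells * n_steps          # T  ~ dx^-(ndim+1)  (run time)
    return n_cells, dt, n_steps, work

for dx in (1.0, 0.5, 0.25):
    n, dt, steps, work = scaling(dx)
    print(f"dx={dx:<5} N={n:<4.0f} dt={dt:<5} steps={steps:<3.0f} work={work:.0f}")
```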
Estimated SUs for 3-D Mach Stems
Here are some stats for the 2-D runs. These were all run on bluestreak at a resolution of 80 cells/rclump. I meant to run them on 128 cores, but I accidentally ran on 128 nodes which is equivalent to 2048 cores. This means that I should have better efficiency for my 3-D runs, so these estimates should be upper limits.
gamma | d (rclump) | run time (s) | SUs (thousands of CPU hrs) |
---|---|---|---|
5/3 | 20 | 33916 | 19.3 |
5/3 | 15 | 31027 | 17.7 |
5/3 | 10 | 23635 | 13.4 |
5/3 | 05 | 22282 | 12.7 |
1.40 | 20 | 34698 | 19.7 |
1.40 | 15 | 31184 | 17.7 |
1.40 | 10 | 29120 | 16.6 |
1.40 | 05 | 22086 | 12.6 |
1.20 | 20 | 41328 | 23.5 |
1.20 | 15 | 37236 | 21.2 |
1.20 | 10 | 32170 | 18.3 |
1.20 | 05 | 23586 | 13.4 |
1.01 | 20 | 52246 | 29.7 |
1.01 | 15 | 52459 | 29.8 |
1.01 | 10 | 47344 | 26.9 |
1.01 | 05 | 28754 | 16.4 |
Zcool | 20 | | 63.9
Zcool | 15 | | 61.2
Zcool | 10 | | 54.9
Zcool | 05 | | 35.6
TOTAL | | | 524.55
If I used the same filling fractions and extended these numbers to 3-D, that would be like calculating the estimated run time for simulating rods instead of spheres. I went ahead and did it this way anyway, because the grid in the z-direction for these problems is relatively small compared to the other dimensions. So these estimates should be an upper limit.
To get the estimate for the 3-D runs, we need to compare the expected number of cell updates per root step.
2Dupdates = N0*(1 + 8*ff(0) + 64*ff(0)*ff(1) + 512*ff(0)*ff(1)*ff(2) + …)
3Dupdates = N0*(1 + 16*ff(0) + 256*ff(0)*ff(1) + 4096*ff(0)*ff(1)*ff(2) + …)
These can be written more generally as sums:

$$\mathrm{updates} = N_0 \sum_{n=0}^{n_{max}} \left(2^{\,n_{dim}+1}\right)^{n} \prod_{i=0}^{n} ff(i-1)$$

where n is the level, N_0 is the total number of root level cells, and ff(i-1) is the filling fraction of level i-1. Note that ff(-1) = 1. It is also important to keep in mind that 3-D cell updates are approximately twice as expensive as 2-D cell updates, so…
3DSUs/2DSUs = 2*(3Dupdates/2Dupdates)
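As an aside, here is a small standalone sketch of these update-count sums; the filling fractions in it are made-up placeholders, not the values from my runs:

```python
# Illustrative sketch of the update-count formulas above; the filling
# fractions here are made-up placeholders, not the values from the runs.
def updates(ff, ndim, n0=1.0):
    """Expected cell updates per root step for an AMR hierarchy.

    ff[i] is the filling fraction of level i; len(ff) = number of refined levels.
    Each level refines by 2 in every dimension and by 2 in time, so each
    refined level costs a factor 2**(ndim+1) per covered root cell.
    """
    total, coverage = 1.0, 1.0
    for n, f in enumerate(ff, start=1):
        coverage *= f                      # ff(0)*ff(1)*...*ff(n-1)
        total += (2 ** (ndim + 1)) ** n * coverage
    return n0 * total

ff = [0.5, 0.25, 0.1]                      # hypothetical filling fractions
ratio = updates(ff, ndim=3) / updates(ff, ndim=2)
print("3DSUs/2DSUs ~", 2 * ratio)          # extra factor 2 for 3-D update cost
```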
After using the above equations, this is what I get for the 3-D runs:
gamma | d (rclump) | SUs (millions) |
---|---|---|
5/3 | 20 | 22.3 |
5/3 | 15 | 20.3 |
5/3 | 10 | 15.5 |
5/3 | 05 | 14.8 |
1.40 | 20 | 22.5 |
1.40 | 15 | 20.3 |
1.40 | 10 | 19.0 |
1.40 | 05 | 14.6 |
1.20 | 20 | 26.8 |
1.20 | 15 | 24.1 |
1.20 | 10 | 20.8 |
1.20 | 05 | 15.5 |
1.01 | 20 | 33.7 |
1.01 | 15 | 34.2 |
1.01 | 10 | 30.8 |
1.01 | 05 | 18.7 |
Zcool | 20 | 74.2 |
Zcool | 15 | 71.5 |
Zcool | 10 | 63.9 |
Zcool | 05 | 41.6 |
TOTAL | | 605
Clearly this is not feasible. Even if I could account for better efficiency in 3-D and perhaps lower filling fractions, I don't think I will be able to do all of these runs at this resolution. If I decrease the resolution to 40 cells/rclump then my total drops to 54.78 million SUs. I can improve this further by decreasing my domain in the x-direction and imposing periodic boundary conditions. I have to alter the resolution slightly to do this, so this would be for 38.4 cells/rclump. For this setup, my total drops to 35.56 million SUs. To summarize:
Resolution (cells/rclump) | Periodic BCs? | Total SUs (millions) |
---|---|---|
80 | no | 605.0 |
76.8 | yes | 390.9 |
40 | no | 54.78 |
38.4 | yes | 35.56 |
It is important to remember that these are upper limits, and there are several factors that will bring these numbers down:
- The 3-D filling fractions are probably a bit lower than what I used.
- The efficiency should be better for the 3-D runs. The 2-D runs typically had efficiencies of 15%-60%. In a perfect scenario, my efficiency would be 100% which is a significant increase.
- These estimates are for BlueStreak. Kraken, for example, has processors that are approximately 1.625 times faster (2.6 GHz vs. 1.6 GHz).
If I do some very rough estimates to try to account for these things, I can get the total SUs down to 9.85 million. Perhaps it would be possible and reasonable to do a subset of the runs at this resolution on Kraken. Below, I computed the average SUs and run time required for a run with 2048 cores on BlueStreak and Kraken.
Machine | SUs (thousands) | Run Time (days) |
---|---|---|
BlueStreak | 800.2 | 16.3 |
Kraken | 492.4 | 10.02 |
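As a quick sanity check on the table above, the run times follow from the SUs at 2048 cores, assuming 1 SU corresponds to one core-hour:

```python
# Quick check of the table above: wall-clock days from total SUs at 2048 cores,
# assuming 1 SU = 1 core-hour. Numbers are the averages quoted in the table.
cores = 2048
for machine, su_thousands in [("BlueStreak", 800.2), ("Kraken", 492.4)]:
    days = su_thousands * 1e3 / cores / 24.0
    print(f"{machine}: {days:.2f} days")
```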
With run times this long, we run into the issue of having to do restarts due to the limited wall time on these machines. This will bring the efficiency down and increase the SUs and run time required. There is also the extra time spent waiting in the queue which probably doubles the time it takes to get a run completed.
Mesh/patches on different numbers of processors
Images of the mesh with patches overlaid for comparison between bluehive (64 procs) and bluestreak (512 procs).
If each patch only contained cells of a given AMR level, then it would be easy to see why more procs → more patches → a smaller region of highest-level refinement (as in the bluestreak case): once a physical region of the grid is marked for refinement (say to the highest level, 5), each patch that overlays that flagged region is forced to contain level 5 cells, and the more processors used, the finer the 'resolution' of that level 5 region would be.
The legend on these plots shows that patches can contain different levels, but it is not clear from these plots whether these subsets of the patches (the different levels) must be contiguous.
It would be nicer to see this image colored by patch number only, rather than level + patch.
Intuitively, it seems that the physical regions flagged for refinement to level 5 should have a 1-1 correspondence between machines. If this isn't the case, as these images suggest, is the difference attributable to compiler differences or to the different number of processors?
Bluehive:
Bluestreak:
Science Meeting Update 02/10/2014 - Eddie
- ran a 1D shock model with a Raymond H ionization table, sent the data to Pat, and am waiting on comparison results
- the 3D MHD pulsed jet run on kraken slowed down quite a bit, so I took a different approach
- I'm trying a very low resolution run, and then I will do a restart at a higher resolution to get some high res late frames
Mach Stems
- below I am showing the d = 5 runs from left to right: Z cooling, and gamma = 5/3, 1.4, 1.2, 1.01 respectively
- on bluestreak I have completed 6 out of 20 of the runs at the 80 cells/rclump resolution. The remaining 14 are still in the queue, but should run and finish this week.
CND with outflow - Marvin
To achieve a proper resolution of the central outflow, I increased the refinement level in the center from 4 to 5 (corresponding to a resolution of 0.02 pc) and the thickness of the outflow to 0.3 pc (inner radius: 0.5 pc, outer radius: 0.8 pc), corresponding to 15 cells. The grid layout is shown in the first picture. The first two animations (first: density, second: velocity) show a simulation with an outflow object and a gravity object embedded in an ambient medium (in 2D, without disk and magnetic fields) and show that the outflow is well resolved.
I did another 2D simulation with disk, cooling, outflow and gravity objects; the last animation shows the inner 4 pc of the face-on density of the disk. Initially the disk has an inner rim at 1 pc; the outflow causes instabilities along the inner rim that lead to the formation of clumps. The clumps have sizes of approx. 0.1-0.2 pc and masses of approx. 50 solar masses.
Outflow embedded in ambient medium: animation of the density
Outflow embedded in ambient medium: animation of the velocity
Outflow with disk in 2D: animation of the face-on density
Grid Layout for outflow simulations:
Clumping due to disk-outflow interaction:
Meeting Update 02/10/2014 -- Baowei
- Tickets
- new: #335(stray momentum flows), #336(Compiling error on bamboo and bluehive with hypre flag = 0), #337(Memory usage), #338(fix comment in scrambler 3.0 in refinements.f90), #339(Making astrobear capable of using dependencies), #340(Organizing modules in the source code), #341(Difference between colliding flows and molecular cloud formation), #342(compiling error on bluestreak)
- closed: none
- Users
- Mark: XSEDE startup allocation: stampede/kraken
- new user asking for the code: Yunnan University (to simulate problems of AGNs or SNRs)
- Resources:
- XSEDE: 1.4 million SUs left on Kraken.
- Worked on
- Ablative RT (#331): With Shule or Betti's BCs, it can run 1E-10 seconds before hypre chokes. By fixing the values of the top right boundary, it runs up to 6E-9 seconds with an oscillating front: #331, comment:22. Is this long enough?
- QPBC(#317): summary of what I tried:
- The divergence comes from Az: we get different values when running with multiple processors
- Running with 1 or 2 processors, the vector potential values are the same.
- Running with 3, 4, or 5 processors, the vector potential values are the same.
- With the new subgrid scheme and minimum grid number = 1, 2, or 4, the vector potential values are the same as with the old subgrid scheme. But for minimum grid number = 8, the values are different.
- It only happens with AMR runs
- #336(Compiling error on bamboo and bluehive with hypre flag = 0)
Meeting Update Feb 10 2014
- Laser stuff: the rad transfer seems to be working fine. Here's what I got after applying a 1e14 W/cm^2 rad flux at the left surface of the simulation box for 1 ns:
At 2 ns, it becomes:
Note that the heat-up is lower than in Peter's simulations, where the region just in front of the target is at about 0.03 keV. To achieve the same temperature, we would need to apply the flux for more than 50 ns, by which point the target gets heated up far more than in Peter's simulation.
We will apply shocks to this initial setup (the 2D runs take several hours on 8 cores, so we should be able to get the results we want in a day or two). The current shock speed I get from the setup is about 100 um/ns when it reaches the target. It would be nice to know the pressure that drives the shock so we can simulate the driving directly.
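On the driving pressure: for a strong shock, the post-shock pressure follows from just the pre-shock (ambient) density and the shock speed, so if the ambient density of the setup is known the driving pressure can be estimated from the standard strong-shock relation (quoted generically here, not as a number for this particular setup):

$$P_\mathrm{post} \simeq \frac{2}{\gamma+1}\,\rho_0\,v_s^{2}, \qquad v_s \approx 100\ \mu\mathrm{m/ns} = 10^{7}\ \mathrm{cm\,s^{-1}}$$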
Also, I have started revising the resistive shock clump paper. My goal is to get it submitted by the end of April.
- TSF: finished revising the paper. There are some minor tweaks needed, but we are nearly done with this one. I have been reviewing some of the simulation papers by Boss and Vanhala over the last week; I think the next paper should investigate the parameter space for disk formation, with two sets of runs:
Mach ranging from 5 to 40 with K = 0.1 to study the impact of M on disk formation;
Mach 10 or 20 shock (this is what they proposed to be able to achieve injection) with K ranging from 0 to 1.
We should bring mixing into focus, as it is one of the key reasons why people are interested in this problem.
From my previous paper, these two sets should total about 1 million CPU hours on Kraken.
The other two ideas are (1) magnetized TSF, as we've seen interesting results from the prototype runs, and (2) studying different sink particle algorithms in the context of TSF.
Meeting update - Erica, 2/10/14
Group related stuff:
I make a motion to move the group meeting to another day, one less busy with meetings. If this motion is passed, I volunteer to help organize another time that works for everyone. The reason I am proposing this switch is that Monday is quite stressful/hectic with all of the meetings: 11:30-12 grad students meet with the speaker, 12-1 Pizza Lunch, 1:30-3 group meeting, 3:30-5 lecture. I personally find it less than optimal to prepare for our Monday meeting under these circumstances. On another personal note, I think a Friday meeting would be best: it wraps up the week with updates, and there are no meetings/classes on that day.
Science Stuff:
Colliding Flows
I am attempting to optimize the wall time of the runs on the different local clusters. Currently, the two runs I have made decent progress on are the 15 and 30 degree shear angle cases. These have been restarted and moved around from bluehive to bluestreak to find a sweet spot. Here are some findings.
In 1 day on bluestreak (512 processors), the restart produced 7 new frames.
The same number of frames were produced on bluehive (64 cores, standard queue) but over 3 days.
The restart frame is 32.
Comparing frame 33 on bluehive and bluestreak:
Mesh on Bluehive:
Mesh on Bluestreak:
Rho on BH:
Rho on BS:
Computing
Worked on getting a baseline memory estimate for astrobear:
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
Found out there are only 96 cores available per user at any given time on bluehive, and only 24 hrs of wall time are allowed on bluestreak (max 48 hrs with a reservation). I asked CIRC about bluestreak and got this email, which I thought was relevant to share with the group:
Erica,
The queue time limits on BlueStreak are to ensure that everyone gets a share of the machine and a little bit to force people to check point their code. Month long run times run the risk of something happening before the job finishes. If the job is not check pointed and restartable, all the run time is lost if the job and or the node crashes. Also, if the job is in an infinite loop the whole month's worth of run time would be lost. So the time limits are short to ensure higher turn over of jobs and good programming practices.
The scheduler is configured to give all users a fair share of time on the BG/Q. So people that have only occasional jobs don't get shut out by people that continually shove lots of jobs into the queue. The share is based on the number of nodes multiplied by the run time of jobs over the past 2 weeks. The fair share and wait time in the queue are the primary factors in setting priorities in the scheduler.
Your jobs have been an exception that I am still trying to figure out. They should be starting in less than a day but they have been taking days to get started. We recently updated to a new version of the scheduler so I would be interested to see if this fixes the problems with the 256 node jobs.
All of this can be overridden with a reservation but there are two problems with that. One is that the smallest reservation we can make is half the machine. So your code would need to scale to 512 nodes or maybe run as multiple jobs in parallel to make use of all the nodes.
The bigger problem is how to justify dedicating half the machine to a single person for a month or two. This is a shared resource for the UR and URMC research communities and we try to be fair in allocating time to all the users.
I hope this explains some of our reasoning behind our policies with regard to the BlueStreak. Feel free to contact us if you still have questions.
Carl
Documentation Stuff:
When do you all want to get together and talk about documentation?
Meeting Update 02/03/2014 - Eddie
Mach Stems
- finished the low res (20 cells/rclump) runs for stationary bow shocks with different gammas and separation distances
- gamma = 1.01, 1.20, 1.40, 5/3 with separations of d = 5, 10, 15, 20 for a total of 16 runs
- currently running new set of sims with Zcooling (set gamma back to 5/3)
- will visualize and post images/movies of all the completed runs this week
- also this week, I will move to bluestreak and run the same simulations with 2 more levels of refinement to give me production runs at 80 cells/rclump
- finally, I will also start testing low res, 2-D runs with a moving bow shock
3-D Pulsed Jets
- hydro simulation on Kraken finished, and beta = 1 is chugging along
- beta = 1 has a few restarts, but it is still producing more frames than previously
- it is currently at frame 40, and I think getting it to frame 50 will be good enough
- I hope to create the emission maps for these runs as soon as possible so Pat can use them for the HST proposal
Cooling Issues
- looked at the Raymond code and found some discrepancies in the ionization and recombination rates
- not sure if this discrepancy is large enough to account for what we've seen in our 1-D radiative shock models
Other
- wrote an abstract for upcoming HEDLA conference
- need to proofread my 2.5-D pulsed jets paper again
- moved office around so now we can make use of Adam's old futon
Meeting update - Erica
- Working on understanding memory usage on the big machines, and putting together a proposal for job resources to share with Jonathan (W?)
http://astrobear.pas.rochester.edu/trac/astrobear/wiki/u/erica/memoryBaseline
- Got added to Kraken account, will consider benefits of moving sims there
- Found some interesting papers last week, http://arxiv.org/abs/1401.7648, http://iopscience.iop.org/0004-637X/779/2/121/, http://arxiv.org/abs/1401.6096
Specifically for the last paper:
- High resolution sims (~25-45 AU dx)
- Radiation transport, MHD, self-gravity, outflow feedback, self-consistent turbulent initial conditions
- Sub-grid model (Cunningham 2011) for outflow feedback: assigns some fraction of the flow to be ejected from the sink along the sink's angular momentum axis, at some fraction of the Keplerian speed. This uses the mass of the sink and a radius for the sink, although I am not certain which radius they are taking for the sink. (A rough sketch of this kind of prescription is below.)
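Below is a minimal, hypothetical sketch of that kind of sub-grid outflow prescription. The wind fraction, Keplerian fraction, and sink radius in the example are placeholders, not values taken from Cunningham 2011 or from the paper above.

```python
# Hypothetical sketch of a sink-particle outflow feedback prescription of the
# kind described above: a fraction f_w of the accreted mass is re-launched
# along the sink's angular momentum axis at a fraction f_k of the Keplerian
# speed at the sink radius. All parameter values below are placeholders.
import numpy as np

G = 6.674e-8                      # cm^3 g^-1 s^-2 (CGS)

def outflow_feedback(m_sink, r_sink, dm_accreted, L_sink, f_w=0.3, f_k=0.3):
    """Return (ejected mass, launch speed, unit launch direction)."""
    v_kep = np.sqrt(G * m_sink / r_sink)          # Keplerian speed at r_sink
    v_launch = f_k * v_kep
    dm_wind = f_w * dm_accreted                   # mass diverted into the wind
    axis = L_sink / np.linalg.norm(L_sink)        # sink angular momentum axis
    return dm_wind, v_launch, axis

# Example with made-up numbers: a 0.5 Msun sink and a ~100 AU sink radius.
msun, au = 1.989e33, 1.496e13
dm, v, axis = outflow_feedback(0.5 * msun, 100 * au,
                               1e-5 * msun, L_sink=np.array([0.0, 0.0, 1.0]))
print(f"eject {dm/msun:.1e} Msun at {v/1e5:.1f} km/s along {axis}")
```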
Meeting Update 02/03/2014 -- Baowei
- Tickets
- new: none
- closed: none
- Resources
- grass is on.
- Worked on
- Ablative RT (#331): fixed bugs related to restart and multi-core running. Flux testing with both hydro & hypre passed (0.1% difference from the ideal case). Revisited the time scale with the corrected dimensions: the target final time should be 1E-8 seconds, and hypre chokes around 2% of this time: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/331?cnum_edit=14#comment:14, possibly due to the top boundary conditions. Tried the 1st-order-in-time CN scheme [ThermalConduction] but it won't help much.
- Quasi Periodic boundaries in a quarter plane (#317): haven't had much time to work on this, but the large divergence seems to come from Az; the x & y directions are OK: http://astrobear.pas.rochester.edu/trac/astrobear/ticket/317#comment:10
Including an outflow in the CND - Marvin
I am currently working on including the outflow from the central stellar cluster in my simulations. The standard setup that came along with the module I am using is shown in the first animation (edge-on view; the color code shows the absolute value of the velocity in km/s). We see that we have an inflow rather than an outflow; it seems that the outflow is not able to escape from the central region. The outflow velocity is set to 700 km/s, while the escape speed is 200 km/s.
Then I did some test runs that consist only of an outflow object embedded in an ambient medium. The second animation shows a run with the standard parameters (the outflow has a thickness of zero). The outflow reaches a velocity of 140 km/s, far below the chosen 700 km/s. Increasing the thickness to 0.1 pc (third animation) leads to outflow velocities of around 700 km/s, but there are still some numerical artifacts, probably due to insufficient resolution.
So I am currently trying to find adequate values for the resolution and for the thickness of the outflow that yield the correct outflow velocity and decrease numerical artifacts to a reasonable level.
Animation of edge-on view of the absolute velocity
Animation of an outflow with thickness 0
Animation of an outflow with thickness 0.1 pc
Edge-on view of the absolute velocity:
Meeting Update Feb 3 2014
- The Berkeley group sent me the project proposal upon my request. Most of my time last week was spent reading the proposal along with documentation (implementation and performance) on the ORION code. I'll briefly talk about it during the meeting. (Not sure if it is OK to post it online.)
- The new NLUF simulation is set up as below:
The radiation flux on the left side of the box (orange zone only) should launch a shock at 115 um/ns. The shock front would be at a temperature of 20 eV with a precursor of width 1 mm when hitting the target. The target and the ambient are both at room temperature. The sound crossing time for the target is on the ms time scale, which is much longer than the intended simulation time scale (less than 1 us), so pressure balancing should not be an issue.
The vertical magnetic field is about 6 Tesla, based on the latest design.
- TSF: Currently working on Adam's version while waiting for Eric's comments.