Meeting update
3D low res MHD Shear Flows page, updated with movies:
http://astrobear.pas.rochester.edu/trac/wiki/u/erica/LowResMHDShearFlows
Christina's high res runs finished
Paper accepted to ApJ
- Submitted paper to ApJ
- Found very useful tool for visualizing latex diffs - appropriately named latexdiff
- See marked up pdf
- Plan to look at single temperature radiation solver this week.
Science Meeting Update 03/31/14 -- Baowei
- Ablative RT: the ablation results with smaller extended zones look OK according to Rui (3:ticket:377, 5:ticket:377). We did not get the bubble from the 2D runs. I will have a meeting with Rui tomorrow to discuss doing an RT simulation to benchmark the growth rates and bubble velocity.
Meeting Update Mar 31 2014
Triggered Star Formation on Contained Poloidal Field with Mach = 1.5, Beta = 3
http://www.pas.rochester.edu/~shuleli/mtsf/polar3.gif
Compared to the Beta = 12 and 6 cases:
http://www.pas.rochester.edu/~shuleli/mtsf/betacomp.gif
We definitely see more "eruption" from the center of the core due to the field compression near the star. I will move on to production runs with different contained-field cases, adding rotation.
Triggered Star Formation on Uniform Field with Mach = 1.5, Beta = 6 (no star formed)
http://www.pas.rochester.edu/~shuleli/mtsf/uniform6.gif
It could be interesting to study the triggered collapse (how likely it is, and the mixing) of magnetically subcritical cores, where the mass-to-flux ratio is below the critical value.
Here we don't need a BE sphere or rotation, just cores with the right mass (~ solar mass), a uniform magnetic field (1 ~ 10 uG), and an array of different Mach numbers and field orientations.
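For reference, one commonly quoted critical value (e.g. Nakano & Nakamura 1978) is

```latex
\left(\frac{M}{\Phi}\right)_{\rm crit} \simeq \frac{1}{2\pi\sqrt{G}} \approx 0.16\, G^{-1/2},
```

so a magnetically subcritical core is one with M/Phi below this value.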
One T rad-transfer vs not
Science Meeting Update 03/31/2014 - Eddie
- Working on high resolution 2-D Mach stem runs. Ruka has 16 of them, and I have 4. The 4 that I have use zcooling, so they take a bit longer (3 submissions per run on bluestreak). They will probably be completed sometime next week, but it depends on the queue times.
- The 3-D pulsed jet run is coming along, although it seems to be slowing down quite a bit. I have noticed that the slowdown is not primarily due to restarts. The last 24 hr run that went through generated only 3 frames, and there was only 1 restart within that run time. If I can only get a few frames every 24 hrs, then it might be time to switch tactics, as I would need about 500,000 SUs to complete the simulation. Here's the density image/movie of what I have so far:
- Need to look back on some old stuff about zcooling and my 1-D radiative shocks, so that I can write an informative email to John Raymond.
- Will try the emission analysis for my 2.5-D pulsed jets on the OI line like Pat suggested. I built that line into the code a long time ago, so it's just a matter of turning a switch on and rerunning astrobear in post-processing mode which should not take very long.
- I am also working on updating all of the problem modules for the new test suite.
Magnetic diffusion into the outflow object - Marvin
In recent weeks we have been facing the problem that the magnetic field diffuses into the outflow object and is amplified there to considerable field strengths.
Here I show two more tests, both animations show the face-on view of the absolute value of the magnetic field strength in Gauss. The inner black circle marks the radius of the outflow object, the outer black circle marks the initial inner rim of the accretion disk.
Animation of the face-on magnetic field strength, outflow with increased ram pressure
Animation of the face-on magnetic field strength, outflow with inner radius set to zero
In the first animation I increased the ram pressure of the outflow by a factor of 8, so that the inner rim of the accretion disk no longer reaches the outflow object. Nevertheless, the magnetic field moves into the outflow object and is amplified to field strengths of more than 10 microgauss.
In the second animation I set the inner radius of the outflow object to zero, so there is no longer a central region where the velocity is set to zero. There is still a magnetic field present in the outflow object, but with very low field strengths of about 10^-11 Gauss. I don't think these have an effect on the results, so in future simulations I will simply set the inner outflow radius to zero to avoid this problem.
Face-on magnetic field strength, outflow with increased ram pressure:
Face-on magnetic field strength, outflow with inner radius set to zero:
More solvers!
I've spent the majority of my time last week troubleshooting my adaptive Riemann Solvers, which are now working (see attached for pretty pictures). The HLL solver is almost finished. I'm optimistic about getting them fully functional in the next day or two. Where to next?
On Mach Stems: 1/16 complete, 2/16 ~60% finished after 1 day of runtime, currently in queue to restart, 13/16 still to be run.
In other news, I just started a Coursera class on high-performance scientific computing taught by Dr. LeVeque at the University of Washington. It focuses on parallel processing and associated algorithms in Python and Fortran. It may be of interest to others in the group: https://class.coursera.org/scicomp-002 . It looks like it should be a lot of fun!
Stellar Winds
This post is mainly intended to support my contemplation.
Theoretically, winds cannot be isothermal throughout space. They will cool and condense into other material. Dust rams into the ISM and gas, and eventually the drag force wins over the radiation force (hypothesis), so the dust quickly loses its momentum. The driving force on the gas then vanishes.
Blue: Isothermal continuous dust-driven wind; driving force applied everywhere.
Yellow: Isothermal continuous dust-driven wind with the driving force turned on at a certain radius (0.8 numerically) and extending to infinity.
Green: Non-isothermal continuous dust-driven wind with the driving force turned on at 0.8 and extending to infinity.
Red: Non-isothermal continuous dust-driven wind with the driving force turned on at 0.8 and turned off at r=5.
The blue and yellow lines coincide at large radius. Here is an enlarged view of the [0,5] radius range.
The potential I used (driving force turned on at 0.8 and off at 5):
The temperature profile I used for the non-isothermal case:
Conclusion: Wind material can fall back, as shown by the red line. Its velocity converges to 0 (the solution blows up at this point), and physically the material starts to fall back. We can guess that it will fall back to the outer potential well and stay there. To push this one step further, material could condense there and form rocks, asteroids and so on?
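As a minimal sketch of this picture (assuming a steady, isothermal Parker-type wind equation with a prescribed driving force switched on at r = 0.8 and off at r = 5; GM, the force strength, and the starting state are illustrative values, not the ones behind the plots above), one could integrate the velocity profile like this:

```python
import numpy as np
from scipy.integrate import solve_ivp

cs = 1.0                 # isothermal sound speed (code units)
GM = 5.0                 # gravity parameter (illustrative value)
r_on, r_off = 0.8, 5.0   # driving force switched on/off, as in the runs above
f0 = 10.0                # driving force strength (illustrative value)

def f_drive(r):
    return f0 / r**2 if r_on <= r <= r_off else 0.0

def dvdr(r, v):
    # steady isothermal wind equation with an external driving force:
    # dv/dr = v (2 cs^2/r - GM/r^2 + f) / (v^2 - cs^2)
    return [v[0] * (2*cs**2/r - GM/r**2 + f_drive(r)) / (v[0]**2 - cs**2)]

# the equation is singular at the sonic point v = cs, so stop there
sonic = lambda r, v: abs(v[0] - cs) - 1e-3
sonic.terminal = True

sol = solve_ivp(dvdr, [r_on, 50.0], [1.5*cs], events=sonic, max_step=0.01)
print(sol.t[-1], sol.y[0, -1])   # last radius and velocity reached
```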
Code Meeting Update 03/25/2014 - Eddie
Pulsed Jet Restarts
My latest run is just finishing on Kraken. It made it to 30 frames in 24 hours on 960 cores. There were 10 restarts triggered, 9 of which were high CFLs. Only one of them was the restart due to nan in flux. It looks like one of the toughest points was going from frame 24 to 25 which had 2 restarts and took almost 4 hours, but it eventually made it through.
I expect the frame output rate to slow down over time since the jet propagates into the grid and triggers more refinement. But the restarts make the run go even slower. It is possible that I could get this run to finish as is, given enough cpu hours, but I'm not sure how much we want to invest.
Right now, a good frame step that has no restarts takes about 1.5 hrs. In the beginning, frames only took about 10 minutes. If I assume that this trend continues linearly, then at best the rest of the simulation would take 220 hours, which is a little more than 9 days. With this many cores, that is 211,200 SUs. This is a best-case estimate: the numbers will increase due to triggered restarts, the need to physically restart the code every 24 hours adds inefficiency, and the frame output rate probably slows faster than linearly.
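For reference, the SU figure is just the projected wall-clock time multiplied by the core count:

```latex
220\ \mathrm{hr} \times 960\ \mathrm{cores} = 211{,}200\ \mathrm{SUs}.
```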
I have not yet had a chance to analyze the chombos from this run, but I attached the standard output to this post.
Czarships
Testing
We are still in the process of completing the new test suite which no longer uses bear2fix. I believe all the routines are in place, and now it is just a matter of setting up each of the problem modules to be compatible with these new routines.
Local Resources
Visit has been working very slowly on my machine (Grass). If I work on clover, ssh to grass, and then run visit, it is much faster. So I'm guessing that the bottleneck is on Grass' graphics card, and thus it has difficulty displaying things locally.
I don't know if this is something Rich can fix, but I think it might also be time for a new machine. Can we get the ball rolling on figuring out what we want and purchasing it?
Meeting 3/24
I'm now helping Eddie run his high res Mach Stems on BlueStreak. They look like they will take ~30 hours ea. on 128 nodes + queue time.
I've written my approximate Riemann solvers and restructured my code so that it's easier to swap in different solvers in the update stage. Currently, it has the following solvers which approximate the star states:
- Exact
- Primitive Variable
- Two Shock
- Two Rarefaction
and then options that cycle through those routines depending on the state U (a sketch of the adaptive switch follows this list):
- Exact Solver
- Adaptive Iterative
- Adaptive Non-Iterative.
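For reference, here is a minimal sketch of the adaptive non-iterative switch as Toro describes it (illustrative variable names and threshold, not my actual implementation):

```python
import numpy as np

def anrs_pstar(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4, Q_user=2.0):
    """Adaptive Non-iterative star-pressure estimate (Toro's ANRS switch).

    A sketch only: picks PVRS in smooth regions, TRRS when two rarefactions
    are expected, and TSRS when two shocks are expected.
    """
    aL, aR = np.sqrt(gamma*pL/rhoL), np.sqrt(gamma*pR/rhoR)
    # PVRS: cheap linearized (primitive-variable) pressure guess
    p_pv = 0.5*(pL + pR) - 0.125*(uR - uL)*(rhoL + rhoR)*(aL + aR)
    pmin, pmax = min(pL, pR), max(pL, pR)

    if pmax/pmin < Q_user and pmin <= p_pv <= pmax:
        return max(p_pv, 0.0)                      # smooth data: PVRS is fine
    elif p_pv < pmin:
        # Two-Rarefaction approximation
        z = (gamma - 1.0) / (2.0*gamma)
        num = aL + aR - 0.5*(gamma - 1.0)*(uR - uL)
        den = aL/pL**z + aR/pR**z
        return (num/den)**(1.0/z)
    else:
        # Two-Shock approximation, seeded with the PVRS guess
        p0 = max(p_pv, 0.0)
        AL, BL = 2.0/((gamma + 1.0)*rhoL), (gamma - 1.0)/(gamma + 1.0)*pL
        AR, BR = 2.0/((gamma + 1.0)*rhoR), (gamma - 1.0)/(gamma + 1.0)*pR
        gL, gR = np.sqrt(AL/(p0 + BL)), np.sqrt(AR/(p0 + BR))
        return (gL*pL + gR*pR - (uR - uL)) / (gL + gR)
```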
Unfortunately I'm running into some bugs where my code crashes about halfway through the run, which I think may be related to the routine that finds Smax (the largest wavespeed) used to calculate the time step. I'm going to get that ironed out this week and hopefully implement the HLLC solvers in the flux calculation portion of the code as well.
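Relatedly, here is the kind of Smax/CFL step the bug is probably hiding in (a sketch with illustrative array names; if any cell carries a NaN or negative pressure, Smax and the time step go bad):

```python
import numpy as np

def max_wavespeed(rho, u, p, gamma=1.4):
    """Smax = max(|u| + a) over the grid, with a the local sound speed."""
    a = np.sqrt(gamma * p / rho)
    return np.max(np.abs(u) + a)

def cfl_dt(rho, u, p, dx, cfl=0.9, gamma=1.4):
    """Time step from the CFL condition, dt = C * dx / Smax."""
    return cfl * dx / max_wavespeed(rho, u, p, gamma)
```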
Interaction between outflow object and magnetic field - Marvin
I show here a simulation of the CND with central outflow object and with an initial toroidal magnetic field of 0.1 milligauss and plot the face-on inverse beta-parameter in the first animation and the edge-on inverse beta-parameter in the second animation. The inner black circle marks the outflow object, the outer black circle marks the initial inner rim of the accretion disk.
As we saw in last week's simulations, after some time a magnetic field develops in the outflow object, although this shouldn't be possible. The beams that we see in the edge-on view after some time are probably created because the magnetic field that appears inside the outflow object is carried outwards by the outflow.
The last animation shows the face-on divergence; it is always below 10^-15 Gauss/parsec. So there are no magnetic field sources; perhaps the magnetic field is transported into the outflow object.
Animation of the inverse beta parameter, face on
Animation of the inverse beta parameter, edge on
Animation of the divergence in Gauss/parsec, face on
Inverse beta parameter, edge-on:
Divergence in Gauss/parsec, face-on:
Science Meeting Update 03/24/2014 - Eddie
- I have Ruka helping me do some Mach stem runs. Specifically, we are working on the 2-D, high resolution runs (160 cells/rclump). When those are finished, we will move on to 3-D.
- I again worked on the emission map analysis for my 2.5-D pulsed jets, but the results did not look very promising. It seems that the FWHM calculation does not support our interpretation of the morphology of the jet and its internal working surfaces. We might be able to do something with total emission.
- I have a new pulsed jet run in the queue on kraken. This is 3-D, beta = 1, with random velocity pulses.
- There is still some work that needs to be done on the test suite which I didn't get to last week. I will focus more attention here and get this done soon.
Simulations with outflow object - Marvin
To investigate why the inner rim of the CND is unstable when adding an outflow object, I did two simulations: one with an outflow object but an outflow velocity of zero, and one with an increased ram pressure. The inner black circle in my animations marks the outflow object; the outer black circle marks the initial inner rim of the accretion disk.
Animation of the Surface Density, inner 4 pc of the disk, outflow velocity = zero
Animation of the Surface Density, inner 4 pc of the disk, ram pressure increased by a factor of 8
The first animation shows the surface density of the disk in a simulation with a zero-velocity outflow object. The inner rim of the disk is stable, as we have seen in the simulations without an outflow object.
The second animation shows the surface density of the disk when the outflow object has an increased ram pressure. I increased the density and the velocity of the outflow each by a factor of 2, so the mass flux is increased by a factor of 4 and the ram pressure by a factor of 8. We see that with these parameters the outflow is strong enough to prevent the inner rim from collapsing, but not strong enough to push the material outwards.
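In other words, with the density and velocity each doubled:

```latex
\dot{m} \propto \rho v \;\Rightarrow\; 2 \times 2 = 4,
\qquad
P_{\rm ram} \propto \rho v^2 \;\Rightarrow\; 2 \times 2^2 = 8 .
```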
Surface Density, inner 4 pc of the disk, outflow velocity = zero:
Surface Density, inner 4 pc of the disk, ram pressure increased by a factor of 8:
1D Planetary Models
I switched to 1D and set the ambient density to 1e4 times what it was. However, I still got odd behavior. After looking at the hypre matrices and vectors, I discovered that there was a typo in the code…
MinTemp*TempScale
should have been
MinTemp/TempScale
If TempScale is 1, this does not matter, but when it is 1e6 it creates problems.
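A toy illustration of why the direction of that conversion matters (the variable names come from the code, but how AstroBEAR applies them here is assumed; the 10 K floor is just an example):

```python
TempScale = 1e6      # Kelvin per code temperature unit
MinTemp   = 10.0     # desired temperature floor in Kelvin

floor_wrong = MinTemp * TempScale   # 1e7 in code units -> an absurdly high floor
floor_right = MinTemp / TempScale   # 1e-5 in code units -> 10 K, as intended
print(floor_wrong, floor_right)
```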
In 1D there is no 'self-gravity' but the simulations are able to evolve on a dynamical time scale.
2D results
Science Meeting Update 03/24/14 -- Baowei
- Ablative RT with adjusting gravity
- Met with LLE people last week and worked on putting the adjusting gravity into the code. First-cut results are shown here: 1:ticket:377. The shell stays stable for about 4 ns. The gravity is not quite accurate, especially when the front density drops due to ablation, because of the extended zones.
Stellar Winds
Red: Radiation driven wind
Blue: Without driving force
Singularity of isothermal, radiation-dust driven wind
Nature of dust driven wind - momentum coupling
Gas momentum equation
Dust momentum equation
Quantities appearing in these equations:
- Drag force on gas
- Drift velocity
- Radius of dust grain
- Sound speed of gas
We can argue that as a dust grain moves through the gas, the drag force "immediately" relaxes the dust velocity to the proper drift velocity with respect to the gas. Therefore the motion is approximately quasi-static everywhere, and the drag force should be balanced by the radiation force and any other forces. Usually the radiation force and the drag force balance each other.
Here the radiation pressure mean efficiency consists of the Planck mean absorption efficiency and the scattering mean efficiency, and depends on the properties of the light and of the dust grain, such as the radius, mass and composition of the grain and the wavelength and spectrum of the light. Furthermore, dust grain formation depends on the evolutionary stage (and luminosity) of the star.
Finally, equating the drag force and the radiation force gives the drift velocity.
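A standard form of this balance (assuming the usual grain drag law, e.g. Kwok 1975 / Lamers & Cassinelli; the exact expressions intended here may differ) is

```latex
F_{\rm rad} = \frac{\bar{Q}_{\rm rp}\,\pi a^2 L_*}{4\pi r^2 c},
\qquad
F_{\rm drag} = \pi a^2 \rho_{\rm gas}\, v_{\rm drift}\sqrt{v_{\rm drift}^2 + c_s^2},
\qquad\Rightarrow\qquad
v_{\rm drift}\sqrt{v_{\rm drift}^2 + c_s^2} = \frac{\bar{Q}_{\rm rp} L_*}{4\pi r^2 c\,\rho_{\rm gas}} .
```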
Meeting Update 03/18/2014 -- Baowei
- Tickets
- Users & Resources
- Wiki updates: tried to update Trac with new plugins, which caused some problems for our users this past weekend and yesterday. Sorry about that. It works now. Rich, Jonathan and Baowei will meet on Thursday to discuss the Trac issues.
- XSEDE proposal writing telecon: Time: 3/21 Fri 3:00 pm ET, Location: Adam's office. Questions are needed.
- Science
- Equations for the Ablative RT initial profile: #345
- Read Betti's paper (Growth rate of the ablative RT instability in ICF Phys. of Plasma 5, 1446 1998)
Meeting update
The shear hydro runs are continuously getting larger and slower. They are at frames 175/200, 165/200, and 158/200. I find myself constantly running out of storage space on the cluster and locally, so I am trying to clear out space as I go. I will move them to Kraken to finish them up and use some of our CPU hours there.
For MHD: I have finished the Shear 15, Shear 30, and Shear 60 runs at 40 + 2 levels with beta = 1, with a uniform field in the x-direction (parallel to the flow). I will likely run a beta = 10 case of these as well, and then run the perpendicular cases. These take about 1 day to run on Bluehive with 32 cores; I could of course try for more processors on Bluestreak or Kraken to get all the low-res runs done/tested before moving to high res… Maybe I will do that.
Working on finishing up BE paper edits this week.
Science Meeting Update 03/18/2014 - Eddie
I've been working on analyzing the emission maps from my 2.5D pulsed jets, but I'm not having much luck. The emission does not seem to be as well-behaved as we had hoped. Below is an image of the Halpha for one of the internal working surfaces:
The Halpha seems to peak outside the jet radius instead of at the jet center. This doesn't directly contradict what we were saying in the paper, but it makes the analysis that we want to do impossible. I'm referring to the full width half max idea.
So I started thinking that maybe there could be an error in the emission map projection that Jonathan and I implemented. Unfortunately, the chombos do not contain any emission data, so I do not have the raw data from before it gets projected. The best I could do for now is to look at densities and temperature.
Neutral H:
Temperature:
Based on the H density, you might think that Halpha emission peaks at the center, but the temperature says the Halpha emission should peak outside the jet radius. So it's very hard to tell what the emission is supposed to look like.
UPDATE
I think I figured out how to get the raw data for the emission. Here is what I got for Halpha:
Interaction between outflow object and magnetic field - Marvin
I show here a simulation of the CND with central outflow object and with an initial vertical magnetic field of 0.1 milligauss. Initially I remove the magnetic field from a cylindrical region around the center.
The following animation shows the face-on view of the inverse beta-parameter with streamlines of the magnetic field lines in black. The small light blue circle indicates the outflow object, the large light blue circle the initial inner rim of the accretion disk. At first the area that is occupied by the outflow object stays field free, but after some time a magnetic field develops (or moves inwards). How is this possible? The magnetic field is frozen into the gas, so it should never be possible for the field to enter the outflow object. Unfortunately I don't have data about the divergence of the magnetic field, so I can not yet tell if the magnetic field moves there or is created there.
Animation of the inverse beta parameter, inner 4 pc of the disk
Simulations with a small outflow object - Marvin
In my previous post I showed that the inner rim of the CND is stable when a resolution level of 4 is used (0.04 pc). I show here a similar simulation, but now with a central outflow object. The following animation shows the surface density of the disk; the small black circle marks the outflow object, the large black circle the initial inner rim of the accretion disk. Surprisingly, the disk's inner rim moves inwards until it hits the outflow object. Maybe the large velocity gradients/fluctuations increase the numerical viscosity in these simulations?
Trac wiki links refresher -- from Rich
Rich suggests that we use attachment links instead of absolute URLs to create links in documents. So instead of things like
http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif
it's better to use
[attachment:Bvec_movie.gif:wiki:u/ehansen]
This dynamic style also gives a convenient direct-download link next to the file attachment link.
Rich found a workaround so that the old posts which use the former style still work, but to be on the safe side we should start making links the dynamic way.
Here's Rich's original email:
Hi Baowei, and folks:

For what it is worth… I would take the time to read the Trac Links page here: http://trac.edgewall.org/wiki/TracLinks

It provides very helpful information on creating links in documents you create on the blog, wiki, etc. that are *dynamic* rather than hardcoded, absolute URLs (e.g. http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif). Taking this as our example, say we wanted to link to an attachment on another wiki page in a blog post.

*** The incorrect way of doing this would be:
[http://astrobear.pas.rochester.edu/trac/attachment/wiki/u/ehansen/Bvec_movie.gif Eddie's Bvec Movie]

*** The correct way would be:
[attachment:Bvec_movie.gif:wiki:u/ehansen]

Where:
* 'attachment:' is a keyword indicating you are referencing a Wiki file attachment
* 'wiki' is a keyword referring to the wiki module of Trac
* 'Bvec_movie.gif' is the literal referring to the filename of the attachment
* 'u/ehansen/' is the wiki page containing this attachment. Do NOT LEAD OR END this reference with a "/", i.e. "/u/ehansen/" is incorrect.

Your links at this point will be created automatically and correctly and even include a handy 'direct download' attachment link in the page next to the file attachment link. If anything changes on the server, which is what seems to have happened today, your links are broken. I have placed a workaround redirect to fix those broken links. Still, I highly recommend you all follow the Trac best-practices for making links.

Rich
SuperMIC Vs. Stampede
| | SuperMIC | Stampede |
| Computing Nodes | 360 | 6400 |
| Processor | Each computing node has two 2.8 GHz 10-core Ivy Bridge-EP E5-2680 Xeon processors | Each computing node has two 2.7 GHz 8-core Xeon E5-2680 (Sandy Bridge) processors |
| Co-Processors | Each computing node has two Intel Xeon Phi 7120P 61-core coprocessors (1.238 GHz, 16 GB) | Each compute node is configured with an Intel Xeon Phi 5110P 61-core coprocessor (1.05 GHz, 8 GB) |
| Memory | 64 GB DDR3 1866 MHz RAM | 32 GB DDR3 1600 MHz RAM, with an additional 8 GB of memory on each Xeon Phi coprocessor card |
| Hybrid Compute Nodes | 20 hybrid nodes, each with two processors + one coprocessor + one NVIDIA Tesla K20X 6 GB GPU | 128 compute nodes with an NVIDIA Kepler K20 5 GB GPU |
Binary progress
I learned some wind theory and AGB background. I also made some modifications to the program, but the program has some odd problems.
XSEDE Proposal Writing Webinar
Summary of the XSEDE Webinar "Writing a Successful XSEDE Allocation Proposal" I attended last week
- The full recorded session can be found here: https://meeting.austin.utexas.edu/p3pmvkq0mjg/ .
- Questions I asked and the speaker's answer:
- Research collaborations (typically how many SUs are applied for / how many SUs are awarded)? — Research Collaborations are the large projects with multiple PIs. Site standards apply; typically 15~16 million SUs. Currently there are about 800 research requests per year in total, requesting about 4.0 billion SUs per year, with 1.8 billion awarded.
- Is it better to submit one big proposal asking for a lot of SUs, or several smaller proposals each asking for a small amount? — One PI is not allowed to apply as PI on multiple projects; they recommend combining different projects from the same group into one. So it sounds like a big proposal is OK?
- Is there a way we can run scaling tests for our own code on these new machines? — Transfer SUs. Some of the machines are very similar, so you don't have to do scaling tests on all of them. For example, SuperMIC (the newest supercomputer funded by NSF, located at LSU, in production April 1st, 2014) is similar to Stampede.
- Important points I caught that we might have missed before:
- Justification of SUs: a clear, simple calculation; log/simple wall time?
- Local compute resources in detail: referees may know some of your big machines.
- Research team in detail: how many faculty, staff, postdocs, graduate and undergraduate students; the ability to complete the plan.
- Publications acknowledging XSEDE and/or feature stories on the XSEDE website: shows productivity; do the PI and Co-PIs publish together?
- There are groups that are awarded 90% of their request.
- Ranch (TACC) and XWFS (the XSEDE-Wide File System) can be requested as storage resources without needing to request computing at the same time.
Timescales for Planetary Simulations
| Time Scale | Equation | In the planet | In the ambient |
| Sound Crossing Time | | 1.5 | 1e-3 |
| Orbital Time | | 2.2 | |
| Radiation Diffusion Time | | 2.3e-8 | 9.2e5 |
| Radiation Equilibrium Time | | 3.8e-17 | 5.1e-4 |
| Light Crossing Time | | 1.1e-5 | |
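For reference, commonly used forms of these timescales (these are assumed expressions, not necessarily the exact ones behind the table) are

```latex
t_{\rm sound} = \frac{R}{c_s}, \quad
t_{\rm orb} = 2\pi\sqrt{\frac{a^3}{G M_*}}, \quad
t_{\rm diff} \sim \frac{3\kappa\rho R^2}{c} = \frac{3\tau R}{c}, \quad
t_{\rm eq} \sim \frac{e_{\rm gas}}{\kappa_P \rho\, c\, a_R T^4}, \quad
t_{\rm light} = \frac{R}{c} .
```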
Numerical Viscosity - Marvin
I did some simulations with different resolutions to investigate the effects of numerical viscosity at the inner rim of the accretion disk. The simulations show the surface density of the disk and have no magnetic field or outflow. The black circle shows the initial inner rim of the accretion disk (1 pc). I ran these simulations for about 10 orbital timescales (at 1 pc).
Animation of the Surface Density, inner 6 pc of the disk, level 2
Animation of the Surface Density, inner 6 pc of the disk, level 4
Animation of the Surface Density, inner 6 pc of the disk, level 5
The first simulation has a resolution of 0.16 pc (level 2) at the location of the inner rim and 0.04 pc (level 4) around the center (approx. the inner 0.2 pc). The disk material moves inwards quickly, probably due to numerical viscosity. At the end of the simulation the inner cavity has almost vanished.
In the second simulation the resolution at the location of the inner rim has been increased to 0.04 pc (level 4). The inner rim still moves inwards, but only from 1 pc to 0.8 pc. We also have to consider that the disk and its inner rim have to relax from their initial conditions, as there are large density and pressure gradients at the beginning of the simulation.
The third simulation has a resolution of 0.02 pc (level 5) everywhere in the disk. This looks slightly better than the previous simulation, but qualitatively there is no big difference.
So I think level 4 is an adequate resolution to avoid too large effects due to numerical viscosity.
Surface Density, inner 6 pc of the disk, level 2:
Surface Density, inner 6 pc of the disk, level 4:
Surface Density, inner 6 pc of the disk, level 5:
Notes from today's meeting with Jim Kasting
Background
- The habitable zone is limited on the near side by the runaway greenhouse at 1 AU and on the far side by the maximum CO2-H2O greenhouse effect at 1.8 AU
- With H2 as a greenhouse gas you can extend this out to 10 AU (3 M_earth with 40 bars of H2 atmosphere)
- Homopause (turbopause) at 100 km - where turbulent mixing ceases and the atmosphere is no longer well mixed. Transport of hydrogen above this is governed by molecular diffusion
- Exopause at 500 km - where the gas becomes collisionless and fast-moving hydrogen particles can escape. The velocity distribution is no longer Boltzmann. The rate is governed by the Jeans evaporation rate.
- On Earth, the exopause is fairly warm and Jeans evaporation is fast, so the H2 loss rate is controlled by diffusion through the homopause
- On early Mars, the exopause is colder, so Jeans evaporation is slower and might have been responsible for regulating H2 loss. Going from 1D to 2D reduced the Jeans evaporation rate by ~4, which could increase the steady-state H2 concentration to the 20% needed to create enough of a greenhouse effect to place early Mars in the habitable zone and explain the fluvial features on Mars (see the formula below).
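For reference, the standard Jeans escape flux from the exopause is

```latex
\Phi_J = \frac{n_{\rm exo}\, v_0}{2\sqrt{\pi}}\,\bigl(1 + \lambda_{\rm esc}\bigr)\, e^{-\lambda_{\rm esc}},
\qquad
v_0 = \sqrt{\frac{2 k_B T}{m}},
\qquad
\lambda_{\rm esc} = \frac{G M m}{k_B T\, r_{\rm exo}} = \left(\frac{v_{\rm esc}}{v_0}\right)^2 ,
```

so a colder exopause (larger escape parameter) exponentially suppresses the escape rate, which is the early-Mars argument above.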
Goal
Calculate an upper limit on the escape rate from the Exopause and show that it is lower than the diffusion limit and hopefully consistent with the 20% concentration necessary to explain the fluvial features on Mars
Complexities of the physical model
- Geometry - Obliquity, magnetic fields,
- Multi-species
- Chemistry
- diffusion
- non-LTE
- heating and cooling
- multi-line
- collisionless - moment equations from the Boltzmann equation
Possible simulations
- 1D, 2-fluid with most of the physics (apply an approximation to model the moments of the velocity distribution where it becomes collisionless) to demonstrate that the H2 loss rate < the diffusion limit
- Evenly spaced in log altitude (not radius)?
- 3 points per pressure scale height
- multi-fluid?
- diffusion
- line transfer - would need multi-bin line transfer
- cooling
- equation of state
- Redo Stone and Proga in 3D and measure the Jeans evaporation rate.
- MHD
- rotation
Science Meeting Update 03/03/2014 - Eddie
- I got a low resolution 3-D Mach stem simulation to run on bluestreak, and now I'm running a higher resolution setup with periodic boundary conditions. If this works fine, then I'll move to kraken and do my production runs.
- I ran a 2.5-D pulsed jet with random velocity pulses. It looks more like real observations because you get more shocks and clumps close to the source. The density images are being made right now, and I'll do the emission images later.
- Following Pat's comments, I've gone through the abstract and intro of my paper. I still have a lot of work to do on the figures and emission line analysis.
- I'm reading through a couple of the papers that Pat pointed me to last week. Namely the 2007 paper that has the random velocity pulses, and the 1993 paper that he did with Raymond which was on emission from 1-D models of pulsed flows.
Meeting Update 03/03/2014 -- Baowei
- Tickets
- new: 16 tickets from Jonathan (355-370). 13 of them are for AstroBEAR 3.0 and have been assigned.
- closed: none
- Users
- Worked with the visitor from Rice on his own module, with ambient and clump objects and an added shock; it compiled and ran OK. Talked about 3D cylindrical clumps and tracers, and about computational resources.
- Ablative RT: got a positive response from LLE but still waiting for detailed confirmation.
- Resources
- Got a call from the director of User Services at TACC while looking for a person to contact about the XSEDE proposals. Found two possible candidates to speak with.
- Worked on
- Testing script: worked on Eddie's new testing script with overlay object
- Parallel hdf5
- Science
- reading articles about stability behavior of the front (Piriz and Tahir, 2013, etc.)
Fun with Approximate Solvers pt 1
Martin was unable to meet with me last week, but gave me some homework (reading through the current iteration of the module and coming up with questions) that I am working on now.
I spent some quality time with Toro over the last week and burned through Chapter 9 and most of Chapter 10, which covered the following approximate solvers:
- Primitive Variable Riemann Solver
- Two Rarefaction Riemann Solver
- Two Shock Riemann Solver
- Adaptive Iterative Riemann Solver
- Adaptive Non-iterative Riemann Solver
- HLL Solver
- HLLC Solver
The exact solver that I wrote was entirely self-contained, but I hadn't anticipated that the new solvers I'd be learning about would follow essentially the same procedure (although in retrospect, it seems obvious). Before running the tests in Chapter 9 using the AIRS and ANRS, I want to rework my code to be a little more flexible. The main code will look something like this (a short sketch follows the list):
1) Run initialize grid script, which will return the initial state.
2) Update state, this function will choose which solver to use based on user input or adaptive conditions from a switch.
3) Print data
4) Loop steps 2 and (optionally) 3 until final time step.
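A minimal sketch of that loop (the helper names initialize_grid, cfl_timestep, update_state, and print_data are placeholders for the routines described in steps 1-4):

```python
def run(solver="exact", nx=100, t_end=0.25, cfl=0.9, gamma=1.4):
    # 1) initialize the grid and return the initial state
    x, U = initialize_grid(nx)
    t, step = 0.0, 0
    # 4) loop until the final time
    while t < t_end:
        dt = min(cfl_timestep(U, x, cfl, gamma), t_end - t)
        # 2) update the state; the solver is chosen from user input or the
        #    adaptive switch inside update_state
        U = update_state(U, dt, solver)
        t += dt
        step += 1
        # 3) print data (optionally only every few steps)
        print_data(x, U, t, step)
```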
This basic framework should make it easier to continue to add solvers as I learn about them. Incidentally, I now have a better understanding of why AstroBear is set up the way it is! I should have plots and tests up before next meeting.
Question: Can I continue writing in Python, or will it become a hindrance as I get farther along in the book? Also, when do I move past 1D?