
File XSEDE_proposal_04152014.tex, 27.7 KB (added by Baowei Liu, 11 years ago)

Main Document to be updated

%\documentclass{article}
\documentclass[preprint2]{aastex}
\usepackage{ctable}
\shorttitle{Study of Colliding Flows and Feedback in Star Formation}
\shortauthors{Frank, Carroll-Nellenback, Liu, Huarte-Espinosa, \& Nordhaus}
\providecommand{\e}[1]{\ensuremath{\times 10^{#1}}}
\begin{document}
\title{Study of Colliding Flows and Feedback in Star Formation}
\author{Adam Frank, Jonathan Carroll-Nellenback, Baowei Liu, Mart\'{i}n Huarte-Espinosa}
%\usepackage{amssymb,amsmath}
%\usepackage{verbatim}
%\usepackage{graphicx}
%\usepackage[utf8]{inputenc}
%\usepackage[T1]{fontenc}
%\usepackage[square, comma, numbers]{natbib}

%\begin{document}

%\title{Study of Colliding Flows and Feedback in Star Formation}
%\author{Adam Frank, Jonathan J. Carroll-Nellenback, Baowei Liu, \\Mart\'{i}n Huarte-Espinosa, Jason Nordhaus}

%\maketitle

\begin{abstract}

-- need update with TSF !!

Observational evidence from local star-forming regions mandates that star formation occurs shortly after, or
even during, molecular cloud formation. Models of the formation of molecular clouds in large-scale converging
flows have identified the physical mechanisms driving the necessary rapid fragmentation. They also point to global
gravitational collapse driving supersonic turbulence in molecular clouds. Previous cloud formation models
have focused on turbulence generation, gravitational collapse, magnetic fields, and feedback. In this work we propose
to explore the effect of structure in the flow on the resulting clouds and the ensuing gravitational collapse. We seek to compare
two situations with different flow properties and processes (flow shear, geometry, magnetic fields, feedback).
We have already found that structured inflows lead to a delay of local gravitational collapse (``star formation").
Thus, more gas has time to accumulate, eventually leading to a strong global collapse, and thus to a
high star formation rate. Uniform inflows fragment hydrodynamically early on, leading to the rapid
onset of local gravitational collapse and an overall low sink formation rate.

Based on these results, obtained with our previous start-up XSEDE allocation (TG-AST120029), we propose to carry out a series of three-dimensional Adaptive Mesh Refinement (AMR) parallel numerical simulations to study cloud collapse and cluster formation under various conditions. This will include colliding flows with different shear angles, and with different magnetic field strengths and orientation angles with respect to the flow.

We are in full production mode with AstroBEAR, but the powerful numerical resources of XSEDE that we are requesting are needed to make further progress on unraveling the mysteries of the feedback process in star formation. To achieve this exciting goal we request the support of 7.1 million SU's on Kraken at NICS.
\bigskip
\end{abstract}

\section{Introduction}

-- need update with TSF!!

The concept of flow-driven cloud formation \citep{Vazquez-Semadeni1995,Ballesteros-Paredes1999,Hartmann2001} can explain
two observational constraints on how molecular clouds form stars, derived from local star-forming regions: first,
all local molecular clouds form stars, and second, the stellar age spreads are on the order of $1-2$~Myr,
several times shorter than cloud crossing times
\citep[see summary in][]{Hartmann2001,Ballesteros-ParedesHartmann2007}.
The first constraint suggests that star formation sets in immediately after (or even during) molecular cloud formation,
while the second constraint is trivially fulfilled in a scenario where the clouds themselves form in large-scale
``converging" flows. The immediate (``rapid") onset of star formation in the forming clouds and the fact that the
star formation efficiency is only a few percent \citep{Evansetal2009} mandate that the clouds are highly structured:
local collapse must set in before global collapse can overwhelm the dynamics.

The notion of cloud formation in converging flows has led to a series of numerical experiments investigating the
physical processes relevant for the rapid fragmentation and for the control of the star formation efficiency.
There is agreement across the models (despite different initial and boundary conditions) on the following results:
(1) Rapid fragmentation is induced by strong radiative losses during the flow collision (possibly even by thermal
instability if the clouds form from atomic gas), and by dynamical instabilities
\citep{Hueckstaedt2003,Audit2005,Vazquez-Semadeni2006,Heitsch2008b}.
(2) Turbulence in molecular clouds is a natural result of the dynamical instabilities during
the cloud formation, and is driven by global gravitational collapse at later stages of the cloud evolution
\citep{Vazquez-Semadeni2007}. Some evidence for this is seen in the filamentary nature of some clouds (Fig.~1).
(3) Strong, non-linear density contrasts can also be driven by
self-gravity in finite clouds, due to geometry (or ``edge") effects \citep{BurkertHartmann2004}.
(4) Although the rapid fragmentation can keep the star formation efficiency low, eventually feedback or cloud
dispersal is needed to prevent a large percentage of the gas from participating in gravitational collapse
\citep{Vazquez-Semadeni2010}.

The most obvious difference in the simulation results is the morphology of the forming clouds. All models use some
mechanism to break the symmetry in the colliding flows -- otherwise, the flow collision would result in a
plane-parallel shock. Models with small-scale perturbations (``noise") in the velocities tend to lead to extremely
flattened clouds with a strong ring due to the gravitational edge effect
\citep{BurkertHartmann2004,HartmannBurkert2007} in finite, sheet-like clouds. If
the velocity perturbations are imposed on larger scales (e.g., as a turbulent power spectrum), or
if the collision interface between the two flows is perturbed, turbulent structures form that are not
necessarily coherent when viewed from different directions \citep{Heitsch2009}.

\section{Previous Accomplishments} -- new section for the renewal !!

\subsection{Summary of Scientific Discoveries}
\subsection{Details of Previous Accomplishments}

\section{Accomplishments and the Way Forward}

-- From the old proposal, need update with TSF or tailor to the section above ---

\begin{figure}[t!]
\includegraphics[width=\columnwidth]{Panels.png}
\caption{Comparison of the ``Clumpy'' run (top) and the ``Smooth'' run (bottom), showing the projected column density along the flow direction (right) and perpendicular to the flow direction (left).}
\label{fig:panels}
\end{figure}

To better understand the effect of initial conditions on the clouds forming in flow collisions, we have already completed a study \citep{Carrolletal13} comparing two (otherwise identical) cloud formation simulations, one with a smooth inflow and one with a clumpy
inflow of identical mass flux. Our study was motivated by the analysis of \citet{Pringle2001}, suggesting
that cloud formation out of warm atomic gas would require time and length scales too large to be feasible (see also \citet{McKeeOstriker}). This problem is solved by the realization that the flow collision is three-dimensional,
allowing gravitational collapse and accretion along the dimensions perpendicular to the flows, and thus circumventing
the one-dimensional limit on column density accumulation \citep{Heitsch2008a}. Yet, \citet{Pringle2001} suggest that clumpy flows
could reduce the accumulation and molecule formation time scales (especially if the clumps are already molecular).
%In this proposal, we will study the effects of clumpy and smooth flows under a host of conditions, including the presence of magnetic fields.
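
As an illustrative sketch of this one-dimensional limit (assuming two identical, uniform streams of number density $n_0$ and inflow speed $v_0$), the column density accumulated in a plane-parallel collision grows only linearly with time,
\begin{equation}
N(t) \simeq 2\, n_0 v_0 t ,
\end{equation}
so that building up the columns needed for molecule formation out of warm atomic gas requires the long time scales noted by \citet{Pringle2001}, whereas lateral accretion in the three-dimensional case is not subject to this limit.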

%In our previous work \citep{Carrolletal13} we presented the results of two simulations of flow-driven molecular cloud formation.
The models were identical
except for the physical conditions of the inflowing gas. One model (``Smooth") began with a completely uniform
flow. The other model (``Clumpy") used the same mass inflow rate, but with the gas distributed in dense clumplets with a filling factor of $5$\%. In both cases, the thermal and dynamical fragmentation of the shocked collision region leads to turbulence at
a level of $10$ to $20$\% of the inflowing kinetic energy, consistent with earlier results. The Clumpy
run showed a somewhat higher velocity dispersion initially, since the clumps are less easily stopped in a
flow collision (Fig.~2). %(see \S\ref{morphologies})

Due to the lower compression factor in the Clumpy model, less gas is cooled to high densities than in the
Smooth run. Thus, the kinetic energy of the inflow is less efficiently dissipated. Together with a non-contiguous
distribution of cold, sub-Jeans fragments, this leads to a suppression of local collapse for nearly $20$~Myr after
the initial flow collision. At that point, sufficient mass has assembled to induce global collapse of the whole
region, resulting in a ``star burst" (more appropriately, a ``sink burst") at a high sink formation rate.
In contrast, the Smooth run shows local collapse already after $10$~Myr, at less than half the sink formation
rate of the Clumpy run. Due to the local nature of the thermal fragmentation, more fragmentation will occur
with increasing resolution \citep{Hennebelle2007}; thus, the times quoted here are {\em upper} limits for the onset
of local collapse. Nevertheless, structured flows can delay the onset of (substantial) local collapse.
Global collapse is only clearly visible in the Clumpy run.

The differences between Clumpy and Smooth inflows extend to the mixing efficiencies. Somewhat counter to
a naive expectation, the Smooth initial conditions result in less well mixed material (and cores). This is
primarily due to the Nonlinear Thin Shell Instability (NTSI) funneling material preferentially into the troughs located far into the opposing
inflow. For the Clumpy run, the global collapse of the accumulated clumps erases all memory of the initial
inflow direction.

In \citet{Carrolletal13} we chose two extremes as our initial conditions. It is more likely that the inflows themselves
will contain turbulent velocity and density structures that are coherent in space. Spatial coherence leads to
stronger shocks in the collision region, and thus to more efficient energy dissipation. In that sense, our Clumpy
run overestimates the effect of structured inflows. In addition, magnetic fields are likely to play a strong role in shaping the evolution of the flows. The challenge ahead is to use a wider array of initial conditions and to include more physical processes (such as magnetic fields) in order to explore the evolution of colliding flows and compare predictions from these cases with observations.

%\begin{figure}[t!]
%\begin{center}
%\includegraphics[width=.99\columnwidth]{3DSimulation.png}%{3DSimulations of clumpy and smooth flows.png}
%\includegraphics[bb=0 0 511 512]{3DSimulation.png}%{3DSimulations of clumpy and smooth flows.png}
%\caption{This may not be the one. Jonathan need caption}
%\label{}
%\end{center}
%\end{figure}

\section{Research Objectives}

\subsection{Colliding Flows}

\subsection{Triggering Star Formation}

--- the following is from the old proposal ---
We will focus on simulations and observational implications of colliding flows under a variety of conditions. This will include (1) colliding flows interacting with various degrees of shear, (2) colliding flows with different magnetic field strengths, and (3) colliding flows with different magnetic field orientations. Our computations will be carried out using AstroBEAR.

\begin{figure}[t!]
\includegraphics[width=\columnwidth]{AMR2.png}
\caption{Slice showing log density from a 3D colliding flows simulation performed with AstroBEAR. Also shown is the AMR mesh, which contains 4 additional levels of refinement.}
\label{fig:collapse}
\end{figure}

\section{Computational Approach} -- will update with new scaling test !!

AstroBEAR is an Adaptive Mesh Refinement (AMR), multi-physics code for astrophysics. AMR remains at the cutting edge of computational astrophysics. AMR simulations adaptively change the resolution within the computational domain to ensure that the most important features of the dynamics are simulated with the highest accuracy. By allowing quiescent regions to evolve at low resolution, AMR simulations achieve order-of-magnitude increases in computational speed. After a decade of development, only a handful of AMR-MHD codes exist for astrophysics (e.g., FLASH, ENZO, RAMSES, ORION, CASTRO).

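The refinement logic can be summarized schematically. The fragment below is an illustrative C sketch, not AstroBEAR source: the routine name, the patch layout, and the simple density-jump test are placeholders for the code's actual refinement criteria. It flags cells whose relative density jump to a neighboring cell exceeds a chosen threshold, the kind of test that concentrates resolution on shocks and collapsing cores while leaving quiescent regions on the coarse grid.
\begin{verbatim}
/* Illustrative AMR refinement flagging for one uniform patch
 * (not AstroBEAR source).  Interior cells are flagged when the
 * relative density jump to any face neighbor exceeds `threshold';
 * boundary cells are left to the caller. */
#include <math.h>

void flag_for_refinement(const double *rho, int *flag,
                         int nx, int ny, int nz, double threshold)
{
    #define IDX(i, j, k) ((i) + nx * ((j) + ny * (k)))
    for (int k = 1; k < nz - 1; k++)
        for (int j = 1; j < ny - 1; j++)
            for (int i = 1; i < nx - 1; i++) {
                double c = rho[IDX(i, j, k)];
                double jump = 0.0;
                /* largest absolute jump over the six face neighbors */
                jump = fmax(jump, fabs(rho[IDX(i + 1, j, k)] - c));
                jump = fmax(jump, fabs(rho[IDX(i - 1, j, k)] - c));
                jump = fmax(jump, fabs(rho[IDX(i, j + 1, k)] - c));
                jump = fmax(jump, fabs(rho[IDX(i, j - 1, k)] - c));
                jump = fmax(jump, fabs(rho[IDX(i, j, k + 1)] - c));
                jump = fmax(jump, fabs(rho[IDX(i, j, k - 1)] - c));
                flag[IDX(i, j, k)] = (jump / c > threshold) ? 1 : 0;
            }
    #undef IDX
}
\end{verbatim}
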
The UR astrophysics group successfully constructed and tested AstroBEAR, a fully parallelized, multi-dimensional AMR MHD code. The success of this effort is evidenced both by the code's completion (Cunningham et al. 2009) and by the papers published using AstroBEAR as it was developed through its radiation-hydrodynamic and MHD versions (a partial list includes Poludnenko et al. 2004ab; Cunningham et al. 2005, 2006ab; Hartigan et al. 2007; Dennis et al. 2008; Yirak 2009, 2010; Li et al. 2012; Huarte-Espinosa et al. 2012).

The multi-physics capabilities of AstroBEAR have been significantly expanded by including solvers for elliptic and parabolic equations. By adapting the linear solver library HYPRE, we now routinely simulate systems in which self-gravity, heat conduction and magnetic resistivity are important. Radiation transfer in the diffusive limit is currently being added. In addition, AstroBEAR can treat gravitationally interacting point particles which accrete mass.

\subsection{AstroBEAR Scaling}\label{subsec:scaling}
AstroBEAR is designed for 2D and 3D adaptive mesh refinement (AMR) simulations, which require algorithms that are highly parallelized and manage memory efficiently. AstroBEAR uses a hierarchical approach to parallelization suitable for multicore architectures, in which large-scale patches of data are distributed to nodes using MPI and the work for an individual patch is distributed across the cores on a node using OpenMP directives. AstroBEAR also employs techniques such as load balancing by threading the grid advances on each level, with preference going to the finer-level grids.

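Schematically, this hybrid strategy looks like the following illustrative C sketch (not AstroBEAR source; the patch counts, sizes, and the trivial cell update are placeholders): patches owned by an MPI rank are advanced one at a time, with the cell loop inside each patch shared among the OpenMP threads on that node.
\begin{verbatim}
/* Illustrative hybrid MPI+OpenMP update (not AstroBEAR source).
 * Each MPI rank owns a set of patches; the cell updates within a
 * patch are shared among the OpenMP threads on that node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NPATCH 4      /* patches per rank (placeholder) */
#define NCELL  4096   /* cells per patch  (placeholder) */

static double patch[NPATCH][NCELL];

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int p = 0; p < NPATCH; p++) {      /* patches owned by this rank */
        #pragma omp parallel for
        for (int c = 0; c < NCELL; c++)     /* threaded cell update */
            patch[p][c] += 1.0;             /* stand-in for the hydro step */
    }

    /* ghost-zone exchange between ranks would go here (MPI_Sendrecv, ...) */
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) printf("update complete\n");
    MPI_Finalize();
    return 0;
}
\end{verbatim}
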
Here we present strong scaling results for AstroBEAR. In Figure \ref{fig:OnKraken} we report scaling test results on Kraken at NICS. Each compute node of Kraken has two six-core AMD Opterons, so we use $120$, $240$, and up to $4992$ cores. The resolution used for these tests is $128^{3}$ plus 4 levels of AMR, the same as for the computations we are planning. The strong scaling plot for the current code shows a slope of $-0.632$ (Figure \ref{fig:OnKraken}a), while the slope for perfect scaling is $-1$. This shows that AstroBEAR has excellent scaling on Kraken. All AMR codes perform redundant calculations in ghost zones, and the redundant fraction grows as the refined zones get smaller. If we could eliminate all redundant calculations, the AstroBEAR scaling would be a straight line with slope $-0.885$ (Figure \ref{fig:OnKraken}b). We are working to approach this slope by optimizing the code and reducing the redundant calculations.

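As an illustration of how these fitted slopes translate into parallel performance (assuming a single power law $T \propto N_{\mathrm{c}}^{\,s}$ for the wall-clock time $T$ over the full range of core counts $N_{\mathrm{c}}$), the speedup and parallel efficiency between $N_1$ and $N_2$ cores are
\begin{equation}
S = \left(\frac{N_2}{N_1}\right)^{-s}, \qquad
E = \frac{S}{N_2/N_1} = \left(\frac{N_2}{N_1}\right)^{-(1+s)} .
\end{equation}
For the measured slope $s=-0.632$, scaling from $120$ to $4992$ cores implies $E \approx 0.25$, while the ghost-zone-corrected slope of $-0.885$ corresponds to $E \approx 0.65$.
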
\begin{figure}[t!]
\includegraphics[width=.5 \textwidth]{strongScalingOnKraken5000.pdf}
\caption{Strong scaling behavior of AstroBEAR at a resolution of $128^{3}$ plus 4 AMR levels on Kraken at NICS. Running time is plotted versus the number of cores in log-log scale. (a) shows the scaling of the current revision of AstroBEAR; the slope of $-0.632$ indicates excellent scaling. (b) shows the efficiency when the redundant ghost-zone computations are counted as necessary work -- equivalently, the hypothetical scaling if the code were modified to avoid these redundant computations, or the result of a weak scaling test in which the growth in redundant computations is balanced by growth in the overall number of computations. In any event, it demonstrates that the AMR overhead from communication, regridding, etc., is minimal. Compared to a fixed-grid run of equivalent resolution, the AMR wall time is hundreds of times shorter, even with the less-than-perfect scaling.}
\label{fig:OnKraken}
\end{figure}

\section{Resource Request} -- will update with the table on the wiki !!!!

Using AstroBEAR, we have found very interesting new results related to star formation, in particular studies of feedback (Quillen et al. 2005; Cunningham et al. 2009; Carroll et al. 2009, 2010) as well as of molecular cloud formation (Carroll et al. 2013). We plan to carry out a series of three-dimensional numerical simulations of colliding flows with different shear angles, magnetic field strengths, and magnetic field orientations to study the star-forming properties of a cluster. In particular, this will include colliding flows with $\beta=\infty$ and shear angles of $0^{\circ}$, $30^{\circ}$ and $60^{\circ}$; colliding flows with $\beta=1$ and $\beta=10$ and shear angles of $0^{\circ}$ and $60^{\circ}$; and colliding flows with $\beta=1$, a shear angle of $0^{\circ}$, and magnetic field orientation angles of $30^{\circ}$ and $60^{\circ}$ (see Table \ref{tab:flows}). This will allow us to address the interplay between magnetic field strength, orientation, and shear of the colliding flows with respect to the resulting properties of the molecular clouds as well as the cores that form within them.
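
For reference, $\beta$ in Table \ref{tab:flows} denotes the usual plasma beta of the inflowing gas, the standard ratio of thermal to magnetic pressure,
\begin{equation}
\beta \equiv \frac{P_{\mathrm{th}}}{P_{\mathrm{mag}}} = \frac{8\pi P_{\mathrm{th}}}{B^{2}},
\end{equation}
so that $\beta=\infty$ corresponds to the purely hydrodynamic runs, and the field orientation angle is measured with respect to the inflow direction.
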
AstroBEAR is presently performing well with multiple (6+) levels of AMR refinement. The tractability of a given run then becomes primarily a question of the number of required cell updates, which is mainly determined by the fraction of the volume where mesh refinement is employed; this volume filling fraction depends on the specific problem and simulation. The computing resources we request are based on our previous runs on XSEDE machines. For example, our hydro calculation with $0^{\circ}$ shear at a resolution of $128^3$ plus 4 AMR levels took about 6,000 SU's per frame. In Table \ref{tab:flows} we summarize the computing resources we require. In total we require 7.1 million CPU-hours, 99.3\% of which will be used for production runs and 0.7\% for test runs and continued development of our code. We can use anywhere from 500 to 5000 cores for a typical production run, depending on queue limitations.
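
As an illustration of how the per-run estimates in Table \ref{tab:flows} follow from this measured cost (assuming the cost per frame stays roughly constant over a run),
\begin{equation}
6{,}000~\mathrm{SU\ frame^{-1}} \times 150~\mathrm{frames} \approx 900{,}000~\mathrm{SU}
\end{equation}
per hydrodynamic run, with each MHD run budgeted at $1{,}050{,}000$ SU's. The seven remaining production runs then total $7{,}050{,}000$ SU's, and the additional $50{,}000$ SU's for testing and development bring the request to $7{,}100{,}000$ SU's.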

-- need introduction to local resources and our whole team !!!
-- need justification why we choose Stampede and SuperMIC !!!

\subsection{I/O Requirements, Analysis, and Storage}
For each of the seven runs in our simulation campaign, we expect to save 150 frames of data at 1-5 GB per frame, so the total data volume for the colliding flows project is about 850 GB to 4.5 TB. In total we expect to need $\sim$5 TB of storage on Kraken. Most of the analysis will be done using parallelized tools built into the AstroBEAR package, which can be run at run time or in post-processing.

\subsection{Financial Support}
Financial support for this project will come from the Space Telescope Science Institute grant HST-AR-12128.01-A, entitled ``STScI - Hubble Telescope - The Reel Deal: Interpreting HST Multi-Epoch Movies of YSO Jets'' (PI: A. Frank; 10/1/2010 - 9/30/2013); the Department of Energy grant DE-SC0001063, entitled ``The Dynamics of Magnetized Astrophysical Jets through Pulsed Power HEDP Lab Studies'' (PI: A. Frank; 8/15/2012 - 8/14/2015); the National Science Foundation grant AST-1109285, entitled ``From Central Engine to Bipolar Outflow: Binaries, MHD and the Evolution of Planetary Nebulae'' (PI: A. Frank; 9/1/2011 - 8/31/2014); DOE Award No. R17081, entitled ``Rice - Clumpy Environments \& Interacting Shock Waves: Realistic Laboratory Analogs of Astrophysical Flows'' (PI: A. Frank; 2/22/2011 - 2/21/2014); and the Space Telescope Science Institute grant HST-AR-12832.01-A, entitled ``Hubble Telescope Cycle 20 - Climbing the Ladder of Star Formation Feedback'' (PI: A. Frank; 11/1/2012 - 10/31/2015).

\section{Summary}
With the advent of our efficient 3D AMR MHD code AstroBEAR and state-of-the-art HPC facilities, we are on the doorstep of significant breakthroughs in understanding the physics of the feedback process in clustered star formation. With our previous start-up XSEDE allocation (TG-AST120029), we have already found, and are in the process of publishing, very important results. We are in full production mode with AstroBEAR, but the powerful numerical resources of XSEDE that we are requesting are needed to make further progress on unraveling the mysteries of colliding flows and feedback in star formation.

If granted this allocation, we will perform a series of 3D Adaptive Mesh Refinement parallel numerical simulations with higher resolution and longer time-scales than ever before to study the formation, structure, stability, and observational implications of the clouds and clusters formed in colliding flows.
\begin{itemize}
\item Study of colliding flows with and without MHD: 7,050,000 SU's (99.3\% of the total request).
\item Testing and continued development of our code: 50,000 SU's (0.7\% of the total request).
\end{itemize}

To achieve these goals, we request 7,100,000 SU's on the NICS Cray XT5 (Kraken).
%\bibliographystyle{plain}
%\bibliography{refs}

\begin{thebibliography}{}
%
\bibitem[Audit \& Hennebelle(2005)]{Audit2005} Audit, E., \& Hennebelle, P. \ 2005, \aap, 433, 1
\bibitem[Bally(2011)]{bally11a} Bally, J. 2011, Computational Star Formation, 270, 247
\bibitem[Bally et al.(2011)]{bally11b} Bally, J., Cunningham, N.~J., Moeckel, N., Burton, M.~G., Smith, N., Frank, A., \& Nordlund, A. 2011, \apj, 727, 113
\bibitem[Reipurth \& Bally(2001)]{bally01} Reipurth, B., \& Bally, J. 2001, ARA\&A, 39, 403
\bibitem[Bally et al.(2006a)]{bally06} Bally, J., Licht, D., Smith, N., \& Walawender, J. 2006, \aj, 131, 473
\bibitem[Bally et al.(2006b)]{bally06b} Bally, J., Reipurth, B., \& Davis, C. 2006, in Protostars and Planets V (Tucson: University of Arizona Press)
\bibitem[Ballesteros-Paredes et al.(1999)]{Ballesteros-Paredes1999} Ballesteros-Paredes, J., Hartmann, L., \& V{\'a}zquez-Semadeni, E. \ 1999, \apj, 527, 285
\bibitem[Ballesteros-Paredes \& Hartmann(2007)]{Ballesteros-ParedesHartmann2007} Ballesteros-Paredes, J., \& Hartmann, L. \ 2007, Revista Mexicana de Astronomia y Astrofisica, 43, 123
\bibitem[Burkert \& Hartmann(2004)]{BurkertHartmann2004} Burkert, A., \& Hartmann, L. \ 2004, \apj, 616, 288
%\bibitem[Ballesteros-Parades, Hartmann & Vazquez-Seandeni, 1999]{ballesteros99}
% Ballesteros-Parades, Hartmann \& Vazquez-Seandeni, 1999, ApJ, 527, 285
\bibitem[Carroll et al.(2010)]{carroll10} Carroll, J.~J., Frank, A., \& Blackman, E.~G. 2010, \apj, 722, 145
\bibitem[Carroll et al.(2013)]{Carrolletal13} Carroll, J.~J., Frank, A., \& Heitsch, F. 2013, ``The Effects of Inhomogeneities within Colliding Flows on the Formation and Evolution of Molecular Clouds'', in preparation
\bibitem[Carroll et al.(2009)]{carroll09} Carroll, J.~J., Frank, A., Blackman, E.~G., Cunningham, A.~J., \& Quillen, A.~C. 2009, \apj, 695, 1376
\bibitem[Carroll et al.(2011)]{carroll11} Carroll, J.~J., Shroyer, B., Frank, A., \& Ding, C. 2011, arXiv:1112.1710, submitted to JCP
\bibitem[Cunningham et al.(2009a)]{cunningham09} Cunningham, A.~J., Frank, A., Carroll, J., Blackman, E.~G., \& Quillen, A.~C. 2009, \apj, 692, 816
\bibitem[Cunningham et al.(2009b)]{cunningham09b} Cunningham, A.~J., Frank, A., Varniere, P., Mitran, S., \& Jones, T.~W. 2009, \apjs, 182, 51
\bibitem[Elmegreen(2000)]{elmegreen00} Elmegreen, B.~G. 2000, \apj, 539, 342
\bibitem[Evans et al.(2009)]{Evansetal2009} Evans, II, N.~J., Dunham, M.~M., J{\o}rgensen, J.~K., et al. \ 2009, \apjs, 181, 321
\bibitem[Hartmann et al.(2001)]{Hartmann2001} Hartmann, L., Ballesteros-Paredes, J., \& Bergin, E.~A. \ 2001, \apj, 562, 852
\bibitem[Hartmann \& Burkert(2007)]{HartmannBurkert2007} Hartmann, L., \& Burkert, A. \ 2007, \apj, 654, 988
\bibitem[Heitsch et al.(2011)]{heitsch11} Heitsch, F., Naab, T., \& Walch, S. 2011, \mnras, 415, 271
\bibitem[Heitsch et al.(2008a)]{Heitsch2008a} Heitsch, F., Hartmann, L.~W., Slyz, A.~D., Devriendt, J.~E.~G., \& Burkert, A. \ 2008, \apj, 674, 316
\bibitem[Heitsch et al.(2008b)]{Heitsch2008b} Heitsch, F., Hartmann, L.~W., \& Burkert, A. \ 2008, \apj, 683, 786
\bibitem[Heitsch et al.(2009)]{Heitsch2009} Heitsch, F., Ballesteros-Paredes, J., \& Hartmann, L.~W. \ 2009, \apj, 704, 1735
\bibitem[Hennebelle \& Audit(2007)]{Hennebelle2007} Hennebelle, P., \& Audit, E. \ 2007, \aap, 465, 431
\bibitem[Huarte-Espinosa \& Frank(2012)]{tenerife} Huarte-Espinosa, M., \& Frank, A. 2012, arXiv:1201.4322
\bibitem[Hueckstaedt(2003)]{Hueckstaedt2003} Hueckstaedt, R.~M. \ 2003, New Astronomy, 8, 295
\bibitem[Klein et al.(2006)]{klein06} Klein, R., Inutsuka, S., Padoan, P., \& Tomisaka, T. 2006, in Protostars and Planets V (Tucson: University of Arizona Press)
\bibitem[Krumholz et al.(2005)]{krumholtz05} Krumholz, M.~R., McKee, C.~F., \& Klein, R.~I. 2005, Nature, 438, 333
\bibitem[Knee \& Sandell(2000)]{knee00} Knee, L.~B.~G., \& Sandell, G. 2000, \aap, 361, 671
\bibitem[Nakamura \& Li(2011)]{nakamura11} Nakamura, F., \& Li, Z.-Y. 2011, \apj, 740, 36
\bibitem[Nakamura \& Li(2007)]{nakamura07} Nakamura, F., \& Li, Z.-Y. 2007, \apj, 662, 395
\bibitem[Matzner(2002)]{matzner02} Matzner, C.~D. 2002, \apj, 566, 302
\bibitem[McKee \& Ostriker(2007)]{McKeeOstriker} McKee, C.~F., \& Ostriker, E.~C. \ 2007, ARA\&A, 45, 565
\bibitem[Moriarty-Schieven et al.(2006)]{moriarty06} Moriarty-Schieven, G.~H., Johnstone, D., Bally, J., \& Jenness, T. 2006, \apj, 645, 357
\bibitem[Norman \& Silk(1980)]{norman80} Norman, C., \& Silk, J. 1980, \apj, 238, 158
\bibitem[Pringle et al.(2001)]{Pringle2001} Pringle, J.~E., Allen, R.~J., \& Lubow, S.~H. \ 2001, \mnras, 327, 663
%\bibitem[Reipurth et al, 2010]{reipurth10}
% Reipurth, B., Mikkola, S., Connelley, M., \& Valtonen, M. 2010, ApJl, 725, L56
\bibitem[Quillen et al.(2005)]{quillen05} Quillen, A.~C., Thorndike, S.~L., Cunningham, A., Frank, A., Gutermuth, R.~A., Blackman, E.~G., Pipher, J.~L., \& Ridge, N. 2005, \apj, 632, 941
\bibitem[Smith et al.(2010a)]{smith10a} Smith, N., Povich, M.~S., Whitney, B.~A., et al. 2010, \mnras, 406, 952
\bibitem[Smith et al.(2010b)]{smith10b} Smith, N., Bally, J., \& Walborn, N.~R. 2010, \mnras, 405, 1153
\bibitem[V{\'a}zquez-Semadeni et al.(1995)]{Vazquez-Semadeni1995} V{\'a}zquez-Semadeni, E., Passot, T., \& Pouquet, A. \ 1995, \apj, 441, 702
\bibitem[V{\'a}zquez-Semadeni et al.(2006)]{Vazquez-Semadeni2006} V{\'a}zquez-Semadeni, E., Ryu, D., Passot, T., Gonz\'alez, R.~F., \& Gazol, A. \ 2006, \apj, 643, 245
\bibitem[V{\'a}zquez-Semadeni et al.(2007)]{Vazquez-Semadeni2007} V{\'a}zquez-Semadeni, E., G{\'o}mez, G.~C., Jappsen, A.~K., Ballesteros-Paredes, J., Gonz{\'a}lez, R.~F., \& Klessen, R.~S. \ 2007, \apj, 657, 870
\bibitem[V{\'a}zquez-Semadeni et al.(2010)]{Vazquez-Semadeni2010} V{\'a}zquez-Semadeni, E., Col{\'{\i}}n, P., G{\'o}mez, G.~C., Ballesteros-Paredes, J., \& Watson, A.~W. \ 2010, \apj, 715, 1302
\bibitem[Warin et al.(1996)]{warin96} Warin, S., Castets, A., Langer, W.~D., Wilson, R.~W., \& Pagani, L. 1996, \aap, 306, 935
\bibitem[Wang et al.(2010)]{wang09} Wang, P., Li, Z.-Y., Abel, T., \& Nakamura, F. 2010, \apj, 709, 27
\end{thebibliography}

\begin{appendix}
\ctable[caption = Expected CPU-hours for Colliding Flow Simulations,
 label = tab:flows
 ]
 {cccccc}
 {}
 {
 \FL
 Shear & $\beta$ & Field & Resolution & Frames & Expected\\
 Angle & & Orientation & & & SU's\\
\ML
 $0^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 & Completed \\
\NN
 $30^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 & 900,000 \\
\NN
 $60^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 & 900,000 \\
\NN
 $0^{\circ}$ & $1$ & $0^{\circ}$ & $128^{3}+4$ & 150 & Completed \\
\NN
 $60^{\circ}$ & $1$ & $0^{\circ}$ & $128^{3}+4$ & 150 & 1,050,000 \\
\NN
 $0^{\circ}$ & $1$ & $30^{\circ}$ & $128^{3}+4$ & 150 & 1,050,000 \\
\NN
 $0^{\circ}$ & $1$ & $60^{\circ}$ & $128^{3}+4$ & 150 & 1,050,000 \\
\NN
 $0^{\circ}$ & $10$ & $0^{\circ}$ & $128^{3}+4$ & 150 & 1,050,000 \\
\NN
 $60^{\circ}$ & $10$ & $0^{\circ}$ & $128^{3}+4$ & 150 & 1,050,000 \\
\ML
 Total & & & & & 7,050,000\\
\LL
 }

\end{appendix}
\end{document}