Czarships/ExternalResources/RenewCollidingFlow: XSEDE_proposal_BL_04102014.tex

File XSEDE_proposal_BL_04102014.tex, 36.6 KB (added by Baowei Liu, 11 years ago)
Line 
1 %\documentclass{article}
2\documentclass[preprint2]{aastex}
3
4\shorttitle{Study of Colliding Flows and Feedback in Star Formation}
5\shortauthors{Frank, Carroll-Nellenback, Liu, Huarte-Espinosa, Li, Kaminski, Hansen}
6\providecommand{\e}[1]{\ensuremath{\times 10^{#1}}}
7
8\usepackage{color}
9\usepackage{ctable}
10
11\begin{document}
12\title{{\color{red} Same Title???}Study of Colliding Flows and Feedback in Star Formation}
13\author{Adam Frank, Jonathan J. Carroll-Nellenback, Baowei Liu, Mart\'{i}n Huarte-Espinosa,\\ Shule Li, Erica L. Kaminski,Edward Hansen}
14%\usepackage{amssymb,amsmath}
15%\usepackage{verbatim}
16%\usepackage{graphicx}
17%\usepackage[utf8]{inputenc}
18%\usepackage[T1]{fontenc}
19%\usepackage[square, comma, numbers]{natbib}
20
21
22%\begin{document}
23
24%\title{Study of Colliding Flows and Feedback in Star Formation}
25%\author{Adam Frank, Jonathan J. Carroll-Nellenback, Baowei Liu, \\Mart\'{i}n Huarte-Espinosa, Jason Nordhaus}
26
27%\maketitle
28
29\begin{abstract}
30
31{\color{red} The following two paragraphs need to be updated with TSF stuff}
32
33
34[---
35
36Observational evidence from local star-forming regions mandates that star formation occurs shortly after, or
37even during, molecular cloud formation. Models of the formation of molecular clouds in large-scale converging
38flows have identified the physical mechanisms driving the necessary rapid fragmentation. They also point to global
39gravitational collapse driving supersonic turbulence in molecular clouds. Previous cloud formation models
40have focused on turbulence generation, gravitational collapse, magnetic fields, and feedback. In this work we propose exploring
41the effect of structure in the flow on the resulting clouds and the ensuing gravitational collapse. We seek to compare two
42situations with different flow properties and processes (flow shear, geometry, magnetic fields, feedback).
43We have already found that structured inflows lead to a delay of local gravitational collapse (``star formation").
44Thus, more gas has time to accumulate, eventually leading to a strong global collapse, and thus to a
45high star formation rate. Uniform inflows fragment hydrodynamically early on, leading to the rapid
46onset of local gravitational collapse and an overall low sink formation rate.
47
48Based on these results obtained with our previous XSEDE allocations (TG-AST120029) we propose to to carry out a series of three-dimensional Adaptive Mesh Refinement (AMR) parallel numerical simulations to study the cloud collapse and cluster formation under various conditions. This will include colliding flows with different shear angles, different magnetic field strengths and orientation angles with respect to the flow.
49
50---]
51
52We are in full production mode with AstroBEAR, but the powerful numerical resources of the XSEDE that we are requesting are needed to make further progress on unraveling the mysteries of feedback process in star formation. To achieve this exciting goal we request the support of {\color{green} 4.0 million SU's} on Stampede at TACC, {\color{green} 2.1 million SU's} on Gordon at SDSC and {\color{green} 2.0 million SU's} on SuperMIC at LSU.
53\bigskip
54\end{abstract}
55
56
57
58\section{Introduction}
59
60{\color{red} This section needs to be updated with TSF!!}
61
62[----
63
64The concept of flow-driven cloud formation \citep{Vazquez-Semadeni1995,Ballesteros-Paredes1999,Hartmann2001} can explain
65two observational constraints on how molecular clouds form stars, derived from local star-forming regions: first,
66all local molecular clouds form stars, and second, the stellar age spreads are on the order of $1-2$~Myr,
67several times shorter than cloud crossing times
68\citep[see summary in ] {Hartmann2001,Ballesteros-ParedesHartmann2007}.
69The first constraint suggests that star formation sets in immediately (or even during) molecular cloud formation,
70and that the second constraint is trivially fulfilled in a scenario where the clouds themselves form in large-scale
71``converging" flows. The immediate (``rapid") onset of star formation in the forming clouds and the fact that the
72star formation efficiency is only a few percent \citep{Evansetal2009} mandates that the clouds are highly structured:
73local collapse must set in before global collapse can overwhelm the dynamics.
74
75
76The notion of cloud formation in converging flows has led to a series of numerical experiments investigating the
77physical processes relevant for the rapid fragmentation and for the control of the star formation efficiency.
78There is agreement across the models (despite different initial and boundary conditions) on the following results:
79(1) Rapid fragmentation is induced by strong radiative losses during the flow collision (possibly even by thermal
80instability if the clouds form from atomic gas), and by dynamical instabilities
81\citep{Hueckstaedt2003,Audit2005,Vazquez-Semadeni2006,Heitsch2008b}
82(2) Turbulence in molecular clouds is a natural result of the dynamical instabilities during
83the cloud formation, and is driven by global gravitational collapse at later stages of the cloud evolution
84\citep{Vazquez-Semadeni2007}. Some evidence for this is seen in the filamentary nature of some clouds (Fig 1). (3) Strong, non-linear density contrasts can also be driven by
85self-gravity in finite clouds, due to geometry (or ``edge") effects \citep{BurkertHartmann2004}.
86(4) Although the rapid fragmentation can keep the star formation efficiency low, eventually, feedback or cloud
87dispersal is needed to prevent a large percentage of the gas to participate in gravitational collapse
88\citep{Vazquez-Semadeni2010}.
89
90The most obvious difference in the simulation results is the morphology of the forming clouds. All models use some
91mechanism to break the symmetry in the colliding flows -- otherwise, the flow collision would result in a
92plane-parallel shock. Models with small-scale perturbations (``noise") in the velocities tend to lead to extremely
93flattened clouds with a strong ring due to the gravitational edge effect
94\citep{BurkertHartmann2004,HartmannBurkert2007} in finite, sheet-like clouds. If
95the velocity perturbations are imposed on larger scales (e.g as a turbulent power spectrum), or
96if the collision interface between the two flows is perturbed, turbulent structures form that are not
97necessarily coherent when viewed from different directions \citep{Heitsch2009}.
98
99\section{Previous Accomplishments}
100{\color{red} new section required for the renewal !! }
101
102\subsection{Summary of Scientific Discoveries}
103{\color{red} new section required for the renewal !! }
104
105\subsection{Details of Previous Accomplishments}
106{\color{red} new section for the renewal !! }
107
108\section{Accomplishments and the Way Forward}
109
110{\color{red} From the old proposal, need update with TSF or tailor to the section above}
111
112\begin{figure}[ht]
113\includegraphics[width=\columnwidth]{Panels.png}
114\caption{Comparison of ``Clumpy'' run (top) and ``Smooth'' run (bottom) showing the projected column density along the flow direction (right) and perpendicular to the flow direction (left)}
115\label{fig:panels}
116\end{figure}
117
118To understand better the effect of initial conditions on the clouds forming in the flow collisions, we have already completed a study \citep{Carrolletal13} comparing two (otherwise identical) cloud formation simulations, one with a smooth inflow, and one with a clumpy
119inflow of identical mass flux. Our studies were motivated by the analysis of \citet{Pringle2001}, suggesting
120that cloud formation out of warm atomic gas would require time and length scales too large to be feasible (see also \citet{McKeeOstriker}). This problem is solved by the realization that the flow collision is three-dimensional,
121allowing gravitational collapse and accretion along the dimensions perpendicular to the flows, and thus circumventing
122the one-dimensional limit on column density accumulation \citep{Heitsch2008a}. Yet, \citet{Pringle2001} suggest that clumpy flows
123could reduce the accumulation and molecule formation time scale (especially if the clumps are already molecular).
124%In this proposal, we will study how hat effects of clumpy flow and smooth flows under a host of conditions including the presence of magnetic fields.
125
126%In our previous work \citep{Carrolletal13} we presented the results of two simulations flow-driven molecular cloud formation.
127The models were identical
128except for the physical conditions of the inflowing gas. One model (``Smooth") began with a completely uniform
129flow. The other model (``Clumpy") used the same mass inflow rate, but with the gas distributed in dense clumplets with a filling factor of $5$\%. In both cases, the thermal and dynamical fragmentation of the shocked collision region leads to turbulence, at
130a level of $10$ to $20$\% of the inflowing kinetic energy. This is consistent with earlier results. The Clumpy
131run showed a somewhat higher velocity dispersion initially, since the clumps are less easily stopped in a
132flow collision.(Fig 2) %(see \S\ref{morphologies}). (Fig 2)
133
134
135Due to the lower compression factor in the Clumpy model, less gas is being cooled to higher densities than in the
136Smooth run. Thus, the kinetic energy of the inflow is less efficiently dissipated. Together with a non-contiguous
137distribution of cold, sub-jeans, fragments, this leads to a suppression of local collapse for nearly $20$~Myr after
138the initial flow collision. At that point, sufficient mass has assembled to induce global collapse of the whole
139region, resulting in a ``star burst" (more appropriately, ``sink burst") at a high sink formation rate.
140In contrast, the Smooth run shows local collapse already after $10$~Myr, at less than half the sink formation
141rate of the Clumpy run. Due to the local nature of the thermal fragmentation, more fragmentation will occur
142with increasing resolution \citep{Hennebelle2007}, thus, the times quoted here are {\em upper} limits for the onset
143of local collapse. Nevertheless, structured flows can delay the onset of (substantial) local collapse.
144Global collapse is only clearly visible in the Clumpy run.
145
146The differences between Clumpy and Smooth inflows extend to the mixing efficiencies. Somewhat counter to
147a naive expectation, the Smooth initial conditions result in less well mixed material (and cores). This is
148primarily due to the Nonlinear Thin Shell Instability (NTSI) funneling material preferentially into the troughs located far into the opposing
149inflow. For the Clumpy run, the global collapse of the accumulated clumps erases all memory of the initial
150inflow direction.
151
152In \citep{Carrolletal13} we chose two extremes as our initial conditions. It is more likely that the inflows themselves
153will contain turbulent velocity and density structures that are coherent in space. Spatial coherence leads to
154stronger shocks in the collision region, and thus to more efficient energy dissipation. In that sense, our Clumpy
155run is overestimating the effect of structured inflows. In addition magnetic fields are likely to play a strong role in shaping the evolution of the flows. The challenge ahead is to use a wider array of initial conditions and include more physical processes (such as magnetic fields) to explore the evolution of colliding flows and compare predictions from these cases with observations
156
157
158%\begin{figure}[t!]
159%\begin{center}
160%\includegraphics[width=.99\columnwidth]{3DSimulation.png}%{3DSimulations of clumpy and smooth flows.png}
161%\includegraphics[bb=0 0 511 512]{3DSimulation.png}%{3DSimulations of clumpy and smooth flows.png}
162%\caption{This may not be the one. Jonathan need caption}
163%\label{}
164%\end{center}
165%\end{figure}
166
167\section{Research Objectives}
168
169{\color{red} the follow is from the old proposal Need to be updated !!!}
170We will focus on simulations and observational implications of colliding flows under a variety of conditions. This will include (1) colliding flows interacting with various degrees of shear and (2) colliding flows with different magnetic field strengths (3) colliding flows with different magnetic field orientations. Our computations will be carried out using AstroBEAR.
171
172\begin{figure}[t!]
173\includegraphics[width=\columnwidth]{AMR2.png}
174\caption{Slice showing log density from 3D Colliding Flows simulation performed with AstroBEAR. Also shown is the AMR mesh which contains 4 additional levels of refinement}
175\label{fig:collapse}
176\end{figure}
177
178\section{Computational Approach}
179{\color{green} Updated by Baowei 4/10/2014!!}
180AstroBEAR is an Adaptive Mesh Refinement (AMR), multi-physics code for astrophysics. AMR remains at the cutting edge of computational astrophysics. AMR simulations adaptively change resolution within a computational domain to ensure that the most important features of the dynamics are simulated with highest accuracy. By allowing quiescent regions to evolve with low resolution, AMR simulations achieve order of magnitude increases in computational speed. After a decade of development only a handful of AMR-MHD codes exist for astrophysics: (e.g. FLASH, ENZO RAMSES, ORION, CASTRO).
181
182The UR astrophysics group successfully constructed and tested AstroBEAR, a fully parallelized, multi-dimensional AMR MHD code. The success of this effort is evidenced both in the code's completion (Cunningham et al 2009) and the papers published using AstroBEAR as it was developed through its radiation-hydrodynamic and MHD versions (a partial list includes: Poludnenko et al 2004ab; Cunningham et al 2005; 2006ab, Hartigan et al 2007, Dennis et al 2008, Yirak 2009, 2010, Li et al 2012, Huarte-Espinosa et al 2012).
183
184The multi-physics capabilities of AstroBEAR have been significantly expanded by including solvers for elliptic and parabolic equations. Adapting the linear system solver HYPRE, we now routinely simulate systems in which self-gravity, heat conduction and magnetic resistivity are important. Radiation transfer in the diffusive limit is currently being added. In addition, AstroBEAR can treat gravitationally interacting point particles which accrete mass.
185
186\subsection{AstroBEAR Scaling}\label{subsec:scaling}
187AstroBEAR is designed for 2D and 3D adaptive mesh refinement (AMR) simulations which require algorithms that are highly parallelized and manage memory efficiently. AstroBEAR uses a hierarchical approach to parallelization suitable for multicore architectures in which large-scale patches of data are distributed to nodes using MPI and the work for an individual patch is distributed across the cores on a node using OpenMP directives. AstroBEAR also employs new techniques such as load balancing by threading the grid advances on each level with preference going to the finer level grids.
188
189Here we present strong scaling and weak scaling results for AstroBEAR. In Figure \ref{fig:scalingPlot}, we report scaling test results on Stampede at TACC. The resolution we used for these test are $40^{3} + 5$ level AMR which is same as the computation we are planning to do. The strong scaling test plot of the current code shows a slope $-0.83$ on up to 1024 cores and $-0.69$ on up to 4096 cores while the slope for perfect scaling is $-1$. This shows AstroBEAR has an excellent scaling on Stampede. For the weak testing results, we keep the work load on each core same (each has $64^3$ cells). So the weak scaling result measures the efficiency of data communication between the processors. Our test results show the communication efficiency drops only $15\%$ running on $1024$ cores and $27\%$ on $4096$ cores. More results and details about the performance of AstroBEAR can be found in the report of "Code Performance of AstroBEAR2.0".
190\begin{figure}[t!]
191\centering
192%\subfigure[Strong Scaling Test for AstroBEAR on Stampede]{
193\includegraphics[width=.5 \textwidth]{TSFstrongOnStampede.eps}
194%\label{fig:StrongTSF}
195%}
196%\subfigure[Weak Scaling Test for AstroBEAR on Stampede]{
197\includegraphics[width=.5 \textwidth]{CFweakOnStampede_f.eps}
198\label{fig:scalingPlot}
199%}
200%\captionof{figure}{(a) Strong scaling behavior of AstroBEAR with Triggered Star Formation module and $3$ levels of AMR on Stampede at TACC. Running time is plotted versus the number of cores in log scale. It has a slope $-0.84$ up to $1024$ cores which shows excellent scaling (slope $-1$ for "perfect scaling). The code performance drops when running on more cores (slope=$0.69$ up to $4096$ cores) of Stampede. (b) Weak scaling behavior of AstroBEAR with Colliding Flows module and fixed grid on Stampede. Runtime efficiency($1/t$ where $t$ is the running time) is plotted versus the number of cores in log scale. The work load on each core is kept same ( each has $64^3$ cells) for the weak scaling test. The communication efficiency of AstroBEAR drops only $15\%$ running on $1024$ cores and $27\%$ on $4096$ cores. }
201\caption{Top: Strong scaling behavior of AstroBEAR with Triggered Star Formation module and $3$ levels of AMR on Stampede at TACC. Running time is plotted versus the number of cores in log scale. It has a slope $-0.84$ up to $1024$ cores which shows excellent scaling (slope $-1$ for "perfect scaling). The code performance drops when running on more cores (slope=$0.69$ up to $4096$ cores) of Stampede. Bottom: Weak scaling behavior of AstroBEAR with Colliding Flows module and fixed grid on Stampede. Runtime efficiency($1/t$ where $t$ is the running time) is plotted versus the number of cores in log scale. The work load on each core is kept same ( each has $64^3$ cells) for the weak scaling test. The communication efficiency of AstroBEAR drops only $15\%$ running on $1024$ cores and $27\%$ on $4096$ cores. }
202
203\end{figure}
204
205\section{Resource Request}
206{\color{green} Tables and Texts are added by Baowei on 4/10. Text and Numbers need to be revisited!!! }
207
208For our colliding flows project, we propose to run 5 levels of AMR (instead of 4 levels as in our last proposal) and smaller base dimensions (comparing with our last proposal). This will increase the filling fractions and increase the effciency of using computing resources. Our estimate CPU-hours for this project is based on the strong scaling tests we've done which has the same resolution but running for only 0.1\% of one frame (we are planning to obtain 200 frames of the data). For our scaling test on 1024 cores, it takes about $144 seconds$ or $41 CPU-hour$ to run 0.1\% of one frame. So the total CPU-hours for one production run is about $820,000$ CPU-hours on Stampede. Since there's no I/Os when doing scaling test and it takes about $20~150$ seconds to save one frame of data (in format of chombo files, depending on the size of data). So we expect $\approx 1000$ CPU-hours to save the total 200 frames on 1024 cores for each run. So the total CPU-hours for each run is about 821,000 on Stampede (See Table(\ref{tab:SUs}).
209
210When doing stron scaling tests for our triggered star formation project, we start from a point where the stars are just formed to get a better estimate. And we use the same base dimensions but lower (3 or 2) AMR levels (due to the availability of starting point at high levels). It took about 361 seconds to get 20\% of one frame from the starting points on 1024 cores which is about 103 CPU hours. And we estmate the running time will be 8 times longer when running with 4 levels of AMR. The current data size for one frame of 3-level-AMR is about 3GB and we estimate for our production runs the data size of each frame is about 6-12 GB. Depending on this information we estimate the total CPU-hours we need for each run of our triggered star formation project is 413,000 on Stampede (See Table(\ref{tab:SUs}).
211
212
213In total we require 4.0 million CPU-hours on Stampede and 2.1 million CPU-hours on Gordon and 2.0 million CPU-hours on SuperMIC, about 98\% of which will be used for production runs and 2\% for testing runs and continue development of our code. We can use anywhere from 500-5000 cores for a typical production run, depending on queue limitations.
214
215{\color{red} The following 2 paragraphs are from the old proposal !!!}
216-------
217Using AstroBEAR, we have found very interesting new results related to star formation - in particular studies of feedback (Quillen et al 2005, Cunningham et al 2009, Carroll et al 2009, 2010) as well as molecular cloud formation (Carroll et al 2013). We plan to carry out a series of three-dimensional numerical simulations of colliding flows with different shear angles, magnetic field strengths, and magnetic field orientations to study the star-forming properties of a cluster. In particular this will include colliding flows with $\beta=\infty$ and shear angles $0^{\circ}$, $30^{\circ}$ and $60^{\circ}$ as well as colliding flows with $\beta=1$ and $\beta=10$ and shear angles of $0^{\circ}$, and $60^{\circ}$, as well as colliding flows with $\beta=1$, a shear angle of $0^{\circ}$, and magnetic field orientation angles of $0^{\circ}$ and $60^{\circ}$. See Table \ref{tab:flows} This will allow us to address the interplay between magnetic field strength, orientation, and shear of the colliding flows with respect to the resulting properties of the molecular clouds as well as the cores that form within them.
218AstroBEAR is presently performing well with multiple 6+ levels of AMR refinement. The tractability of a given run then becomes more a question of the number of needed cell updates, which is mainly determined by the fraction of the volume where mesh refinement is employed. And the volume filling fraction depends on the specific problem/simulation. The computing resources we require are based on our previous runs on XSEDE machines. For example, Our hydro calculation with $0^{\circ}$ shear and with resolution $128^3 + 4$ AMR took about 6,000 SU's for 1 frame. In Tables \ref{tab:flows} we summary the computing resources we require. In total we require 7.1 million CPU-hours, 99.3\% of which will be used for production runs and 0.7\% for testing runs and continue development of our code. We can use anywhere from 500-5000 cores for a typical production run, depending on queue limitations.
219-------
220
221{\color{green} The following section is added by Baowei on 04/10/14}
222\section{Introduction to Local Resources and Research Team}
223We have a variety of computational platforms available locally for our research. The Center of Integrated Research Computing(CIRC) at the University of Rochester hosts the Blue Gene/Q system called BlueStreak. BlueStreak consists of 1,024 nodes, 16,384 CPU cores, 16 TB of RAM, and 400 TB of storage. Each node consists of a 16-core A2 processor with 32 MB of cache and access to 16 GB of RAM. By reservation our group can have access to the whole rack(Normal reservations are for less than half of the rack, 24 hours per reservation). CIRC also hosts another Linux cluster called BlueHive which is expected to be in production in June of 2014. The new BlueHive system consists of approximately 200 nodes of IBM’s iDataPlex architecture with a high-speed, low-latency, InfiniBand interconnect. Each node houses 2 x 12-core Intel “Ivy Bridge” processors, with a range in memory from 64 GB to 256 GB. 30 of the 64 GB nodes have 2 dedicated NVIDIA K20X (“Kepler”) GPU (graphics processing unit) cards and 8 of the 64 GB nodes have 2 dedicated Intel Phi 5110P accelerator cards. Our group has access to 170 cores of the BlueHive system and 96 cores of them are owned by and dedicated to our group. Our group also has a dedicated 32 processor equivalent (PE) cluster built for developing and debugging AstroBEAR code.
224
225{\color{red} This part needs more details}
226Our research group which currently has one faculty member, two computational science staffs, two postDocs, four graduate students and two undergraduate students has a very strong backgroup of computaional astrophysics......
227
228
229\subsection{I/O Requirements, Analysis, and Storage}
230Based on Table (\ref{tab:SUs}), we expect to save 200 frames of data with size 5-20GB for each frame (the size of the each frame depends how many zones trigger the highest level of AMR) for each of the six runs of our colliding flows simulation with 5 levels of AMR. So the total data size for our colliding flows project is about 6 TB - 24 TB. We are planning to run these simulations on Stampede and we expect to need $\sim$24 TB of storage on Ranch of TACC. For each of of the four runs of our triggered-star-formation simulations, we expect to save 100 frams of data with size about 6GB, So the total data size for our triggered-star-formation project is about 2.4TB. These simulations will be done on Gordon. So we requestion $\sim$ 3 TB on Gordon. Most of the analysis will be done using parallelized tools built into the AstroBEAR package that can be performed at run time or in post processing.
231
232\subsection{Why choosing to run on these machines?}
233{\color{green} this paragraph is added by Baowei on 04/10/14. Need revisit !!!}
234We've been using Kraken and Stampede (before Ranger) for the past two very productive years. With Kraken decommissioned this April, we have to shift most our production runs to Stampede while seeking for opportunities on other machines. AstroBEAR behaves very good on Stampede (see the scaling tests attached). Since SuperMIC and Gordon have similar archetectures to Stampede, we expect the code will scale well on these two machines. For our Colliding flows project, we propose to run with higher level AMRs than before. So it will requires more memory usage. Gordon will be a good machine for these runs since it has big (64GB) RAM per node. Also we are planning to optimize the code on the Xeon Phi coprocessors of our local BlueHive system. So we expect the code will take the advantage of the MIC architecture and run even better on Stampede and SuperMIC.
235\subsection{Financial Support}
236{\color{red} waiting for updated information from Sondra. Will be updated by Baowei!!}
237Financial support for this project will come from the Space Telescope Sci Institute grant HST -AR-12128.01-A entitled ``STSci - Hubble Telescope - The Reel Deal: Interpreting HST Multi-Epoch Movies of YSO JetsSpace'' (PI, A. Frank; 10/1/2010 - 9/30/2013) and the Department of Energy grant DE-SC0001063 entitled ``The dynamics of magnetized Astrophysical Jets through Pulsed Power HEDP lab Studies'' (PI, A. Frank; 8/15/2012 - 8/14/2015) and the National Science Foundation, NSF AST-1109285 entitled ``From Central Engine to Bipolar Outflow: Binaries, MHD and the Evolution of Planetary Nebulae'' (PI, A. Frank; 9/1/2011 - 8/31/2014). DOE, Award no. R17081, entitled ``Rice - Clumpy Environments \& Interacting Shock Waves: Realistic Laboratory Analogs of Astrophysical Flows'', (PI: A. Frank, 2/22/2011 - 2/21/2014) and the Space Telescope Sci Institute grant HST-AR-12832.01-A entitled ``Hubble Telescope Cycle 20 - Climbing the Ladder of Start Formation Feedback``, (PI A. Frank, 11/1/2012 - 10/31/2015).
238
239\section{Summary}
240{\color{red} Text from old proposal, need to updated !!!}
241With the advent of our efficient 3D AMR MHD code AstroBEAR and state of the art HPC facilities, we are on the doorstep of significant breakthroughs in understanding the physics of the feedback process in the clustered star formation. With our previous start-up XSEDE allocation (TG-AST120029), we have already found and are in the process of publishing very important results. We are in full production mode with AstroBEAR, but the powerful numerical resources of the XSEDE that we are requesting are needed to make further progress on unraveling the mysteries of interacting binaries.
242
243If granted this allocation, we will perform a series of 3D Adaptive Mess Refinement parallel numerical simulations with higher resolution and longer time-scales than ever before to study the formation, structure and stability and observational implications of inter-acting stellar binaries.
244\begin{itemize}
245\item Study of colliding flows with MHD 4,930,000 SU's on Stampede.
246\item Study of triggered star formation with/without MHD 10,000,000 SUs on Stampede.
247\item Testing and continue development of our code. 7,000 SU's on Stampede; 0.7\% of total request.
248\end{itemize}
249
250To achieve these goals, we request SU's on the TACC Stampede.
251%\bibliographystyle{plain}
252%\bibliography{refs}
253{\color{red} bibliography need to be updated !!!}
254\begin{thebibliography}{}
255%
256\bibitem[Audit \& Hennebelle(2005)]{Audit2005} Audit, E., \& Hennebelle, P. \ 2005, \aap, 433, 1
257\bibitem[Bally, J. 2011a]{bally11a}
258Bally, J., 2011, Computational Star Formation, 270, 247
259\bibitem[Bally, J., Cunningham, N.J., Moeckel, N., et al. 2011]{bally11b}
260John Bally, Nathaniel J. Cunningham, Nickolas Moeckel, Michael G. Burton, Nathan Smith, Adam Frank, and Ake Nordlund, 2011, \apj, 727, 113
261\bibitem[Reipurth, B. and Bally, J.2001]{bally01}
262Bo Reipurth and John Bally, 2001, ARAA, 39, 403
263\bibitem[Bally, J., Licht, D.,Smith, N., & Walawender, J., 2006]{bally06}
264Bally, J., Licht, D.,Smith, N., \& Walawender, J., 2006, AJ, 131, 473
265\bibitem[Bally, J., Reipurth, B., and Davis, C., 2006]{bally06b}
266Bally, J., Reipurth, B., and Davis, C., 2006, Protostars and Planets V, University of Arizona Press, Tucson, in press
267\bibitem[Ballesteros-Paredes et al.(1999)]{Ballesteros-Paredes1999} Ballesteros-Paredes, J., Hartmann, L., \& V{\'a}zquez-Semadeni, E. \ 1999, \apj, 527, 285
268\bibitem[Ballesteros-Paredes \& Hartmann(2007)]{Ballesteros-ParedesHartmann2007} Ballesteros-Paredes, J., \& Hartmann. L. \ 2007, Revista Mexicana de Astronomia y Astrofisica, 43, 123
269\bibitem[Burkert \& Hartmann(2004)]{BurkertHartmann2004} Burkert, A., \& Hartmann, L. \ 2004, \apj, 616, 288
270%\bibitem[Ballesteros-Parades, Hartmann & Vazquez-Seandeni, 1999]{ballesteros99}
271% Ballesteros-Parades, Hartmann \& Vazquez-Seandeni, 1999, ApJ, 527, 285
272\bibitem[Carroll et al, 2010]{carroll10}
273Carroll, J.J., Frank, A., \& Blackman, E.G. 2010, \apj, 722, 145
274\bibitem[Carroll et al, 2013]{Carrolletal13}
275Carroll, J.J., Frank, A., \& Heitsch, F. 2013,"The effects of Inhomogeneities within Colliding Flows on the Formation and Evolution of Molecular Clouds", in preparation
276\bibitem[Carroll et al, 2009]{carroll09}
277Carroll, J.J., Frank, A., Blackman, E.G., Cunningham, A.J.,
278\& Quillen, A.C. 2009, \apj, 695, 1376
279\bibitem[Carroll et al, 2011]{carroll11}
280Carroll, J. Shroyer, B; Frank, A; Ding, C eprint arXiv:1112.1710. 2011, submitted to JCP
281\bibitem[Cunningham et al, 2009]{cunningham09}
282 Cunningham, A.J., Frank, A., Carroll, J., Blackman, E.G., \& Quillen, A.C. 2009, \apj, 692, 816
283\bibitem[Cunningham et al, 2009b]{cunningham09b}
284Cunningham, A. Frank, A., Varniere, P., Mitran, S., \& Jones, T.W. 2009, \apjs, 182, 51
285\bibitem[Elmegreen et al, 2000]{elmegreen00}
286Elmegreen B., G.,2000, \apj, 539, 342
287\bibitem[Evans et al.(2009)]{Evansetal2009} Evans, II, N.~J., Dunham, M.~M., J{\o}rgensen, J.~K. et al. \ 2009, \apjs, 181, 321
288 \bibitem[Hartmann et al.(2001)]{Hartmann2001} Hartmann, L., Ballesteros-Paredes, J., \& Bergin, E.~A. \ 2001, \apj, 562, 852
289 \bibitem[Hartmann \& Burkert(2007)]{HartmannBurkert2007} Hartmann, L., \& Burkert, A. \ 2007, \apj, 654, 988
290 \bibitem[Heitsch et al, 2011]{heitsch11}
291Heitsch, F., Naab, T., \& Walch, S. 2011, mnras, 415, 271
292 \bibitem[Heitsch et al.(2008a)]{Heitsch2008a} Heitsch, F., Hartmann, L.~W., Slyz, A.~D., Devriendt, J.~E.~G., \& Burkert, A. \ 2008, \apj, 674, 316
293 \bibitem[Heitsch et al.(2008b)]{Heitsch2008b} Heitsch, F., Hartmann, L.~W., \& Burkert, A. \ 2008, \apj, 683, 786
294 \bibitem[Heitsch et al.(2009)]{Heitsch2009} Heitsch, F., Ballesteros-Paredes, J., \& Hartmann, L.~W. \ 2009, \apj, 704, 1735
295 \bibitem[Hennebelle \& Audit(2007)]{Hennebelle2007} Hennebelle, P., \& Audit, E. \ 2007, \aap, 465, 431
296 \bibitem[Huarte-Espinosa \& Frank(2012a)]{tenerife}
297Huarte-Espinosa, M., \& Frank, A., 2012, arXiv:1201.4322
298 \bibitem[Hueckstaedt(2003)]{Hueckstaedt2003} Hueckstaedt, R.~M. \ 2003, New Astronomy, 8, 295
299 \bibitem[Klein et al, 2006]{klein06}
300Klein, R., Inutsuka, S., Padoan, P., and Tomisaka, T , 2006, Protostars and Planets V, University of Arizona Press, Tucson
301 \bibitem[Krumholtz et al, 2005]{krumholtz05}
302Krumholtz, M., McKee, K., \& Klein, R., 2005, Nature, 438, 333
303 \bibitem[Knee et al, 2000]{knee00}
304 Knee, L.B.G., \& Sandell, G., 2000, ,aap, 361, 671
305 \bibitem[Nakamura et al, 2011]{nakamura11}
306 Nakamura, F., \& Li, Z.-Y., 2011, \apj, 740, 36
307 \bibitem[Nakamura et al, 2007]{nakamura07}
308 Nakamura, F., \& Li, Z.-Y., 2007, \apj, 662, 395
309 \bibitem[Matzner et al, 2002]{matzner02}
310Matzner, C.D., 2002, ,\apj, 566, 302
311 \bibitem[McKee \& Ostriker(2007)]{McKeeOstriker} McKee, C.~F., \& Ostriker, E.~C.\ 2007, ARA\&A, 45, 565
312 \bibitem[Moriarty-Schieven et al, 2006]{moriarty06}
313 Moriarty-Schieven, G.H., Johnstone, D., Bally, J., \& Jenness, T. 2006, \apj, 645, 357
314 \bibitem[Norman et al, 1980]{norman80}
315 Norman, C., \& Silk, J., 1980, \apj, 238, 158
316 \bibitem[Pringle et al.(2001)]{Pringle2001} Pringle, J.~E., Allen, R.~J., \& Lubow, S.~H. \ 2001, MNRAS, 327, 663
317 %\bibitem[Reipurth et al, 2010]{reipurth10}
318% Reipurth, B., Mikkola,
319% S., Connelley, M., \& Valtonen, M. 2010, ApJl, 725, L56
320 \bibitem[Quillen et al, 2005]{quillen05}
321 Quillen, A.C., Thorndike, S.L.,Cunningham, A., Frank, A., Gutermuth, R.A., Blackman, E.G., Pipher, J.L., ,\& Ridge, N., 2005, ,\apj, 632, 941
322\bibitem[Smith et al, 2010a]{smith10a}
323Smith, N., Povich, M.S., Whitney, B.A., et al. 2010, mnras, 406, 952
324\bibitem[Smith et al, 2010b]{smith10b}
325Smith, N., Bally, J., \& Walborn, N.R. 2010, mnras, 405, 1153
326\bibitem[V{\'a}zquez-Semadeni et al.(1995)]{Vazquez-Semadeni1995} V{\'a}zquez-Semadeni, E., Passot, T., \& Pouquet, A. \ 1995, \apj, 441, 702
327\bibitem[V{\'a}zquez-Semadeni et al.(2006)]{Vazquez-Semadeni2006} V{\'a}zquez-Semadeni, E., Ryu, D., Passot, T., Gonz\'alez, R.~F., \& Gazol, A.\ 2006 ApJ 643, 245
328\bibitem[V{\'a}zquez-Semadeni et al.(2007)]{Vazquez-Semadeni2007} V{\'a}zquez-Semadeni, E., G{\'o}mez, G.~.C., Jappsen, A.~K., Ballesteros-Paredes, J., Gonz{\'a}lez, R.~F., \& Klessen, R.~S. \ 2007, \apj, 657, 870
329\bibitem[V{\'a}zquez-Semadeni et al.(2010)]{Vazquez-Semadeni2010} V{\'a}zquez-Semadeni, E., Col{\'{\i}}n, P., G{\'o}mez, G.~C., Ballesteros-Paredes, J., \& Watson, A.~W. \ 2010, \apj, 715, 1302
330\bibitem[Warin et al, 1996]{warin96}
331Warin, S., Castets, A., Langer, W. D., Wilson, R. W., \& Pagani, L. 1996, A\&A, 306, 935
332\bibitem[Wang et al, 2009]{wang09}
333Wang, P., Li, Z.-Y., Abel, T., \& Nakamura, F. 2010, \apj, 709, 27
334\end{thebibliography}
335
336\begin{appendix}
337%\ctable[caption = Expected CPU-hours for Colliding Flow Simulations,
338% label = tab:flows
339% ]
340% {cccccc}
341% {}
342% {
343% \FL
344% Shear& $\beta$ & Field & Resolution & Frames & Expected\\
345% Angle& & Orientation & & & SU's\\
346%\ML
347% $0^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 & Completed \\
348%\NN
349% $30^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 &900,000 \\
350%\NN
351% $60^{\circ}$ & $\infty$ & -- & $128^{3}+4$ & 150 & 900,000 \\
352%\NN
353% $0^{\circ}$ & $1$ & $0^{\circ} $ & $128^{3}+4$ & 150 & Completed \\
354%\NN
355% $60^{\circ}$ & $1$ & $0^{\circ} $ & $128^{3}+4$ & 150 &1,050,000 \\
356%\NN
357% $0^{\circ}$ & $1$ & $30^{\circ} $ & $128^{3}+4$ & 150 &1,050,000 \\
358%\NN
359% $0^{\circ}$ & $1$ & $60^{\circ} $ & $128^{3}+4$ & 150 &1,050,000 \\
360%\NN
361% $0^{\circ}$ & $10$ & $0^{\circ} $ & $128^{3}+4$ & 150 &1,050,000 \\
362%\NN
363% $60^{\circ}$ & $10$ & $0^{\circ} $ & $128^{3}+4$ & 150 &1,050,000 \\
364%\ML
365% Total & & & & & 7,050,000\\
366%\LL
367% }
368%
369\ctable[caption = Estimate CPU-hours needed for each run (based on 1024 cores of Stampede,
370 label = tab:SUs
371 ]
372 {ccc}
373 {}
374 {
375 \FL
376 Project & Colliding Flows & Tiggered Star Formation\\
377\ML
378 AMR levels for (Scaling VS Production) & 5 VS 5 & 3/2 VS 4 \\
379\NN
380 Num of frames (Scaling VS Production) & 0.1\% VS 200 &20\% VS 100 \\
381\NN
382 WallTime/CPU-hour for Strong Scaling(1024 cores) & 144 Secs/41 & 361 Secs/103 \\
383\NN
384 Data size of each frame & 5-20 GB & 3GB (6-12GB with 4 AMR) \\
385\NN
386 Time needed to Save one frame & ~ 20-150 Seconds & ~22-120 Seconds \\
387\NN
388 Estimate CPU-hours for computing & 820,000 & 412,000 \\
389\NN
390 Estimate CPU-hours for I/Os &1,000 & ~800 \\
391\ML
392 Total Estimate CPU-hours & ~821,000 & ~413,000 \\
393\LL
394 }
395
396\ctable[caption = Expected CPU-hours for Continuing Colliding Flows 3D Simulations,
397 label = tab:CF
398 ]
399 {cccc}
400 {}
401 {
402 \FL
403 Shearing Angle($\theta$) & Resolution & Magnetic Field ($\beta$) & Expected SUs on Stampede\\
404\ML
405 0 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
406\NN
407 15 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
408\NN
409 30 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
410\NN
411 0 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
412\NN
413 15 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
414\NN
415 20 & $ 40^{3} $ + 5 AMR & 1 & 821,000\\
416\NN
417 30 & $ 40^{3} $ + 5 AMR & 1 &821,000\\
418\ML
419Total & & 4,926,000
420\LL
421 }
422
423\ctable[caption = Expected CPU-hours for 3D Triggered Star Formation Simulations,
424 label = tab:TSF
425 ]
426 {cccc}
427 {}
428 {
429 \FL
430 Hydro/MHD & Resolution & Rotaion & Expected SUs on Stampede\\
431\ML
432 Hydro & $320\times912\times192 $ + 4 AMR & Y & 413,000\\
433\NN
434 Hydro & $320\times912\times192 $ + 4 AMR & Y & 413,000\\
435\NN
436 MHD $\beta=1$,Toroidal & $320\times912\times192 $ + 4 AMR & Y & 413,000\\
437\NN
438 MHD $\beta=1$,Poloidal & $320\times912\times192 $ + 4 AMR & Y & 413,000\\
439\ML
440Total & & 1,652,000
441\LL
442 }
443\end{appendix}
444\end{document}