Peter Binev (University of South Carolina), Slides
          "Recent developments of approximation theory and greedy algorithms"
Greedy approximation is a form of nonlinear approximation that finds the approximant by a sequential procedure in which, at each step, the improvement is determined by optimizing a certain quantity. While this approach often does not guarantee optimal convergence rates, it proves very useful in problems in which finding the best approximation is difficult or computationally infeasible. In this talk we shall consider the basic theoretical setup and results in greedy approximation and give two examples of related problems: near-best tree approximation and the application of the greedy strategy in the reduced basis method.
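To make the sequential procedure concrete, here is a minimal matching-pursuit sketch (a basic greedy approximation scheme, not necessarily the variant treated in the talk); the dictionary, target, and step count are illustrative assumptions:

```python
import numpy as np

def greedy_approximate(f, dictionary, n_steps):
    """Pure greedy (matching pursuit): at each step pick the dictionary
    element most correlated with the residual and subtract its projection."""
    residual = f.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_steps):
        corr = dictionary.T @ residual          # correlations with unit-norm columns
        k = np.argmax(np.abs(corr))             # greedy choice: best single improvement
        step = corr[k] * dictionary[:, k]
        approx += step
        residual -= step
    return approx, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)                  # normalize dictionary columns
f = 2.0 * D[:, 3] - 1.5 * D[:, 17]              # a sparse combination as target
approx, res = greedy_approximate(f, D, 25)
```

Each step optimizes only the immediate gain, which is exactly why the method is cheap but need not achieve the best n-term rate.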

Wojciech Czaja (University of Maryland), Slides
          "Operator analysis of geometric data structures"
As the complexity and size of our data sets increase, there is a growing need for stable, adaptive, and data-dependent techniques to analyze and extract features from these complex structures. We shall discuss a suite of approaches enabled by harmonic analysis to address this need. These include Laplacian and Schroedinger Eigenmaps, Locally Linear Embedding, and Diffusion Maps and Wavelets, among others. The common denominator of these techniques is that they treat the point cloud as a graph and study certain naturally arising operators on such data graphs. We shall discuss methods to construct the data graphs and the arising operators, and how to use them for problems such as classification or detection. We shall also address computational complexity issues and methods to resolve them.
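The graph-operator viewpoint can be sketched in a few lines: build a Gaussian-weighted graph on the point cloud, form the graph Laplacian, and embed via its low eigenvectors. The kernel width and point cloud below are illustrative choices, not from the talk:

```python
import numpy as np

def laplacian_eigenmap(points, sigma, n_components):
    """Minimal Laplacian eigenmap sketch: Gaussian-weighted graph on a
    point cloud, unnormalized Laplacian L = D - W, embedding from the
    eigenvectors with the smallest nonzero eigenvalues."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))    # edge weights
    np.fill_diagonal(W, 0.0)                    # no self-loops
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)        # ascending eigenvalues
    return eigvecs[:, 1:1 + n_components]       # skip the constant eigenvector

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
cloud = np.column_stack([np.cos(theta), np.sin(theta)])   # points on a circle
emb = laplacian_eigenmap(cloud, sigma=0.5, n_components=2)
```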

Yanbei Chen (Caltech), Slides
          "Semi-analytical modeling of compact binary mergers"

Don Estep (Colorado State University), Slides
          "Uncertainty quantification: Errors, variations, and inverses"
In many realistic engineering and scientific situations, practical limitations on experiments and physical understanding make it necessary to use all available information that can be gleaned from both experimental observations and mathematical modeling. This is one reason for the rapid increase of interest in research combining statistical and mathematical techniques to investigate and predict the behavior of models derived from physical principles, a field that has come to be called uncertainty quantification. In this talk, we describe some of the interesting aspects of uncertainty quantification, including the formulation of the inverse problem for parameter determination, the interaction of simulation error and statistical error, error estimation for statistical quantities, and the development of algorithms that distribute computational effort efficiently for statistical analysis.
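A toy version of the parameter-determination inverse problem mentioned above: given noisy observations of a model output, recover a distribution over the parameter. The linear model, noise level, and grid-based Bayesian update are illustrative assumptions, not the methods of the talk:

```python
import numpy as np

# Toy inverse problem: recover parameter a in y = a * x from noisy data
# via a grid-based Bayesian posterior (Gaussian likelihood, flat prior).
rng = np.random.default_rng(2)
a_true, noise = 2.5, 0.1
x = np.linspace(0, 1, 20)
y_obs = a_true * x + noise * rng.standard_normal(x.size)

a_grid = np.linspace(0, 5, 501)
# log-likelihood of each candidate parameter value against the data
loglik = np.array([-0.5 * np.sum((y_obs - a * x) ** 2) / noise**2 for a in a_grid])
post = np.exp(loglik - loglik.max())    # subtract max for numerical stability
post /= post.sum()                      # normalized posterior on the grid
a_mean = np.sum(a_grid * post)          # posterior mean estimate
```

The width of `post` quantifies the parameter uncertainty induced jointly by the data noise and the model.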

Jan Hesthaven (Brown University), Slides
          "Approximations and reduced models for problems with many parameters"
Models of reduced computational complexity are used extensively throughout science and engineering to enable the fast/real-time modeling of complex systems for control, design, uncertainty quantification, etc. While of undisputed value, these reduced models are often heuristic in nature, and the accuracy of their output is often unknown, limiting their predictive value. We discuss ongoing efforts to develop reduced methods endowed with a rigorous a posteriori theory to certify the accuracy of the model. The focus will be on reduced models for parameterized linear partial differential equations. We outline the basic ideas behind certified reduced basis methods, discuss an offline-online approach to ensure computational efficiency, and emphasize how the error estimator can be exploited to construct an efficient basis at minimal off-line computational cost. A key feature of this approach is the greedy algorithm underlying the basis construction. We shall discuss how this can be used to construct simulation-based databases and devote special attention to the challenges of high-dimensional problems, including sampling strategies and parameter reduction techniques. This is work done in collaboration with Y. Chen (UMass Dartmouth), B. Stamm (Brown), S. Zhang (Brown), and Y. Maday (Paris VI).
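The greedy basis construction can be sketched as follows. In practice the selection is driven by a cheap a posteriori error estimator; in this illustrative sketch the exact projection error plays that role, and the parameterized "solutions" are manufactured:

```python
import numpy as np

def greedy_rb(snapshots, tol):
    """Greedy reduced-basis sketch: repeatedly add the (orthonormalized)
    snapshot whose current approximation error is largest, until the
    worst error over the parameter set falls below tol."""
    basis = np.zeros((snapshots.shape[0], 0))
    errors = np.linalg.norm(snapshots, axis=0)
    while errors.max() > tol:
        k = np.argmax(errors)                           # worst-approximated parameter
        v = snapshots[:, k] - basis @ (basis.T @ snapshots[:, k])
        basis = np.column_stack([basis, v / np.linalg.norm(v)])
        resid = snapshots - basis @ (basis.T @ snapshots)
        errors = np.linalg.norm(resid, axis=0)          # stand-in for the error estimator
    return basis

# manufactured parameterized "solutions": u(x; mu) smooth in mu
mu = np.linspace(0, 1, 100)
x = np.linspace(0, 1, 200)[:, None]
U = np.exp(-(x - mu[None, :]) ** 2 / 0.1)   # 200 grid points x 100 parameter values
B = greedy_rb(U, 1e-3)
```

Because the family varies smoothly with the parameter, the greedy loop certifies the whole parameter set with far fewer basis vectors than snapshots.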

David Knezevic (Harvard), Slides
          "A component-based reduced basis method for many-parameter systems"
We begin with an overview of the Reduced Basis framework, including discussion of Reduced Basis methods in the context of frequentist uncertainty quantification. We then present new methodology that combines standard Reduced Basis approximations with a static condensation formulation, which allows standalone parametrized Reduced Basis Components to be developed. The Reduced Basis Components can be connected to assemble a wide variety of many-parameter systems, which leads to a significant increase in modeling flexibility compared to conventional Reduced Basis approximation. We demonstrate the methodology with numerical results drawn from applications in thermal analysis, structural analysis and acoustics.
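The static condensation idea behind component coupling can be illustrated on a single component: partition its stiffness matrix into interface ("port") and interior blocks and eliminate the interior unknowns via the Schur complement, so components couple only through their ports. A minimal sketch with a synthetic SPD matrix (all sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_p, n_i = 4, 12                                # port and interior unknowns
A = rng.standard_normal((n_p + n_i, n_p + n_i))
K = A @ A.T + (n_p + n_i) * np.eye(n_p + n_i)   # synthetic SPD "stiffness" matrix
f = rng.standard_normal(n_p + n_i)

Kpp, Kpi = K[:n_p, :n_p], K[:n_p, n_p:]
Kip, Kii = K[n_p:, :n_p], K[n_p:, n_p:]
fp, fi = f[:n_p], f[n_p:]

S = Kpp - Kpi @ np.linalg.solve(Kii, Kip)       # Schur complement on the ports
g = fp - Kpi @ np.linalg.solve(Kii, fi)
u_p = np.linalg.solve(S, g)                     # port solution (small system)
u_i = np.linalg.solve(Kii, fi - Kip @ u_p)      # recover interior solution
```

In the component-based method, the interior blocks are in addition replaced by per-component Reduced Basis approximations, which is what makes the assembled system cheap.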

Akil Narayan (University of Massachusetts Dartmouth), Slides
          "Interpolation in high dimensions and non-intrusive reduced order modeling"
We discuss interpolatory strategies for the approximation of parameterized functions, a problem that frequently arises in computational and experimental models of interest. Interpolatory methods typically only weakly leverage the topology on parameter space, and are termed "non-intrusive" because their implementation simply requires Monte-Carlo-like interrogation of experimental outcomes or legacy code. We focus on reviewing two major interpolatory strategies: stochastic collocation in the context of uncertainty quantification, and the Empirical Interpolation Method and its variants. The typical desiderata for a reduced order modeling technique are significant reduction in computational effort, and accuracy on par with the full model. We discuss how interpolatory methods frequently achieve these requirements, and are usually easier to implement than competing methods.
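A compact sketch of the Empirical Interpolation Method mentioned above: greedily pick basis functions and "magic" interpolation points from a set of snapshots, then interpolate any member of the family using only its values at those points. The snapshot family below is an illustrative choice:

```python
import numpy as np

def eim(snapshots, n_terms):
    """EIM sketch: each new magic point is where the current
    interpolation error of the worst-approximated snapshot peaks."""
    basis, points = [], []
    for _ in range(n_terms):
        if basis:
            Q = np.array(basis).T                     # grid x m
            coeffs = np.linalg.solve(Q[points, :], snapshots[points, :])
            err = snapshots - Q @ coeffs              # interpolation residuals
        else:
            err = snapshots.copy()
        j = np.argmax(np.abs(err).max(axis=0))        # worst snapshot
        p = np.argmax(np.abs(err[:, j]))              # its worst point ("magic point")
        basis.append(err[:, j] / err[p, j])           # normalized: value 1 at p
        points.append(p)
    return np.array(basis).T, points

x = np.linspace(0, 1, 200)
mus = np.linspace(0.2, 0.8, 50)
F = np.array([1.0 / (x + mu) for mu in mus]).T        # 200 x 50 snapshot matrix
Q, pts = eim(F, 8)
```

The scheme is non-intrusive in exactly the sense of the abstract: it only ever queries snapshot values, never the model's internals.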

Justin Romberg (Georgia Tech), Slides
        "Using structure to solve underdetermined systems of linear equations and overdetermined systems of quadratic equations"
We will start by giving a high-level overview of the fundamental results in the field that has come to be known as compressive sensing. The central theme of this body of work is that underdetermined systems of linear equations can be meaningfully "inverted" if they have structured solutions. Two examples of structure would be if the unknown entity is a vector which is sparse (has only a few "active" entries) or if it is a matrix which is low rank. We discuss some of the applications of this theory in signal processing and statistics. Finally, we will show how we can leverage some of the results to solve systems of quadratic equations.
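A minimal sketch of "inverting" an underdetermined linear system with a sparse solution, using orthogonal matching pursuit (one standard recovery algorithm; the problem sizes and sparsity are illustrative assumptions):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit sketch: recover a sparse x from the
    underdetermined system y = A x by greedily growing the support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        k = np.argmax(np.abs(A.T @ residual))         # most correlated column
        support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # re-fit on the whole support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 120)) / np.sqrt(60)      # 60 equations, 120 unknowns
x_true = np.zeros(120)
x_true[[5, 37, 90]] = [1.0, -2.0, 1.5]                # 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, 3)
```

With random measurement matrices of this shape, a 3-sparse vector is recovered essentially exactly, which is the central phenomenon of compressive sensing.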

Gianluigi Rozza (SISSA MathLab, Italy), Slides
          "Reduced order modelling in multi-parametrized systems: applications to inverse problems and optimal control"
We review the reduced basis method in the framework of computational reduction strategies for the approximation of partial differential equations modelling systems characterized by physical and geometrical parameters, focusing on computational performance as well as on the stability, accuracy, and reliability of the results. We introduce suitable geometrical parametrization settings based on small deformations of geometrical control points, such as free-form deformation techniques, which also allow a reduction of geometrical complexity in terms of the number of parameters used to represent the computational domains. Numerical results will show some examples of optimal control and inverse problems related to possible applications in the mathematical modeling and numerical simulation of the human cardiovascular system modelled by viscous flows.
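The control-point parametrization can be illustrated with a tiny free-form deformation on the unit square: the domain is mapped through a Bernstein tensor-product lattice, and the few control-point displacements become the geometric parameters. The `ffd` helper and lattice size below are hypothetical choices for illustration:

```python
import numpy as np
from math import comb

def ffd(points, displacements):
    """Free-form deformation sketch on the unit square: displace points
    by a Bernstein tensor-product blend of control-point offsets
    (displacements has shape (L+1, M+1, 2))."""
    L, M = displacements.shape[0] - 1, displacements.shape[1] - 1
    x, y = points[:, 0], points[:, 1]
    out = points.astype(float).copy()
    for i in range(L + 1):
        bi = comb(L, i) * (1 - x) ** (L - i) * x ** i     # Bernstein basis in x
        for j in range(M + 1):
            bj = comb(M, j) * (1 - y) ** (M - j) * y ** j # Bernstein basis in y
            out += (bi * bj)[:, None] * displacements[i, j]
    return out

pts = np.array([[0.5, 0.5], [0.0, 0.0], [1.0, 1.0]])
d = np.zeros((3, 3, 2))
d[1, 1] = [0.1, 0.0]          # move only the central control point
new = ffd(pts, d)
```

Only the nonzero control-point offsets enter as parameters, which is how the geometric description is compressed.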

Mark Scheel (Caltech), Slides
          "Numerical relativity: Triumphs and challenges of binary black hole simulations"
Binary systems of compact objects (black holes and neutron stars) are important sources of gravitational radiation that should be seen by gravitational wave detectors such as LIGO. In the last several years it has become possible to accurately solve Einstein's equations on a computer for a binary black hole system: we can follow many binary orbits, study the collision and merger of the two black holes into a remnant black hole, and compute in detail the resulting gravitational waveform that would be detected on Earth. We discuss how these computations are being used to explore the seven-dimensional space of initial parameters (spin of each object plus the mass ratio) and to assist gravitational-wave data analysis. We also discuss key challenges, such as producing a waveform that covers the entire frequency range of the detectors, reducing the large computational expense, and pushing to extreme values of the initial parameters.

Michele Vallisneri (Jet Propulsion Laboratory/Caltech), Slides
          "Markov Chain Monte Carlo: The ultimate multitool"
Markov chain Monte Carlo (MCMC) integration has rightly been named one of the ten most important algorithms of the 20th century. Its attractiveness stems from the fact that the same simple algorithm can be applied to integration over spaces of any dimension, with the same promise of performance and convergence. For physicists, an additional attraction is the prospect of using a physical analogy (statistical ensembles) to solve a mathematical problem. In many fields of astronomy, MCMC is used to tackle inference problems in Bayesian statistics that are analytically intractable, including searches and parameter estimation in gravitational-wave astronomy. I briefly review the origins and the basics of MCMC and related techniques, discuss some of their applications in general relativity, and speculate on future developments and research directions.
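The "same simple algorithm" is a few lines: the random-walk Metropolis sampler below targets a one-dimensional Gaussian, but the identical code works in any dimension. The target, step size, and chain length are illustrative choices:

```python
import numpy as np

def metropolis(logp, x0, n_steps, step, rng):
    """Random-walk Metropolis sketch: propose a Gaussian step, accept
    with probability min(1, p(x') / p(x))."""
    x, samples = float(x0), []
    lp = logp(x)
    for _ in range(n_steps):
        x_new = x + step * rng.standard_normal()
        lp_new = logp(x_new)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
            x, lp = x_new, lp_new
        samples.append(x)                          # rejected moves repeat x
    return np.array(samples)

rng = np.random.default_rng(5)
target = lambda x: -0.5 * (x - 3.0) ** 2          # log-density of N(3, 1)
chain = metropolis(target, 0.0, 20000, 1.0, rng)
burned = chain[5000:]                             # discard burn-in
```

Note that only the unnormalized log-density is needed, which is exactly why MCMC handles Bayesian posteriors that are analytically intractable.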

Alan Weinstein (Caltech), Slides
          "Computational challenges in gravitational-wave data analysis"
Several advanced gravitational wave detectors are coming online in the next few years. With these, we expect to detect gravitational waves, study the properties of the waves and their astrophysical sources, and open the new field of gravitational wave astrophysics. But it won't be easy, because the signals are very weak, varied, and buried in non-Gaussian, non-stationary instrumental noise. There are well-established techniques for optimal separation of signals from Gaussian noise, using matched filtering. These techniques, however, require accurate waveform templates, especially regarding their frequency/phase evolution over the entire duration of the signal, coherently in a network of detectors around the world. For the case of compact binary mergers (involving black holes and/or neutron stars), these waveforms are currently very computationally expensive to produce. Further, the waveform templates are governed by many parameters (for compact binary mergers, these include the masses and spins of the component compact objects); and the parameter space for such waveforms in the advanced detector band (10-2000 Hz) is very large, with many degeneracies. The ranges of parameter values favored by nature are largely unknown. The detection of these weak signals and the extraction of their parameters is made far more complex by the non-Gaussian and non-stationary nature of the instrumental noise; as a result, assessing the significance of a weak signal is subtle and problematic. We will review some of these issues.
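The matched-filtering idea can be sketched in the idealized Gaussian, white-noise case (real detector noise is colored and non-stationary, which is precisely the complication the abstract highlights): slide a known template over the data and locate the signal at the peak of the correlation. The toy chirp, noise level, and signal location are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, t0 = 4000, 1500                          # data length, true signal start
t = np.arange(300)
template = np.sin(2 * np.pi * t * (0.01 + 1e-4 * t))   # toy "chirp" template
template /= np.linalg.norm(template)        # unit-norm template

data = 0.3 * rng.standard_normal(n)         # white Gaussian noise
data[t0:t0 + template.size] += 3.0 * template          # buried signal

# correlate the data against the template at every lag
snr = np.correlate(data, template, mode="valid")
t_hat = int(np.argmax(np.abs(snr)))         # estimated signal start time
```

This is optimal only against Gaussian noise; with non-Gaussian glitches, a loud peak in `snr` need not be a signal, which is why significance estimation is subtle.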


URL: www.tapir.caltech.edu/~romgr/
Email: romgr@tapir.caltech.edu
Manuel Tiglio
Jan Hesthaven
Scott Field
Chad Galley