Extreme events are short-lived episodes, triggered by exogenous forcing or internal instabilities, during which observables depart significantly from their mean values. A great deal of effort has been devoted to predicting and statistically quantifying extreme events because they can have catastrophic consequences (e.g., structural failure, rogue waves, extreme weather, and market crashes). The task is arduous because the systems that give rise to extreme events are most often highly complex and strongly nonlinear. This mini-symposium provides a venue to review the latest advances in the field.
14:00
Optimal Sampling with Gaussian Process Regression
Antoine Blanchard | Massachusetts Institute of Technology | United States
Authors:
Antoine Blanchard | Massachusetts Institute of Technology | United States
Themistoklis P. Sapsis | Massachusetts Institute of Technology | United States
We propose an optimal-sampling algorithm specifically designed for the quantification of extreme events in complex, high-dimensional systems. From a small initial set of samples (no more than a dozen), the algorithm iteratively determines the "next-best points" in the parameter space that most improve our estimate of the heavy-tailed probability density function (pdf) of some scalar function, while minimizing the number of function evaluations. The proposed method is superior to existing approaches in that (a) the algorithm accounts for the relationship between input and output values, (b) each "next-best point" is determined by minimizing a criterion whose evaluation (and that of its gradient) is extremely fast, and (c) the use of Gaussian process regression allows the input space to be high-dimensional.
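The iterative next-best-point idea can be sketched in plain NumPy on a one-dimensional toy objective: a small Gaussian-process surrogate is refitted after each new sample, and the next point maximizes an acquisition that weights posterior variance by the predicted output magnitude. This variance-times-|mean| criterion is a simplified stand-in for the authors' output-weighted criterion, not their exact formulation, and the objective, kernel length scale, and budget below are illustrative.

```python
import numpy as np

def rbf(X1, X2, ell=0.2):
    # squared-exponential kernel with unit amplitude
    d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * d / ell**2)

def gp_posterior(Xs, Xtr, ytr, noise=1e-6):
    # GP posterior mean and variance at candidate points Xs
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xs, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    return mu, var

# toy objective with a sharp "extreme" bump at x ~ 0.7
f = lambda x: np.exp(-80.0 * (x - 0.7)**2) + 0.1 * np.sin(8 * np.pi * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (6, 1))            # small initial design
y = f(X[:, 0])

cand = np.linspace(0, 1, 401)[:, None]   # dense candidate grid
for _ in range(10):
    mu, var = gp_posterior(cand, X, y)
    # acquisition: uncertain regions with large predicted |output|,
    # a proxy for points that shape the tail of the output pdf
    acq = var * (np.abs(mu) + 1e-3)
    xnext = cand[np.argmax(acq)]
    X = np.vstack([X, xnext[None, :]])
    y = np.append(y, f(xnext[0]))
```

Because the acquisition rewards both uncertainty and large predicted outputs, sampling is steered toward regions likely to produce extreme values rather than spread uniformly over the parameter space.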
14:30
A Conditional Space-Time POD Formalism for the Statistical Description of Intermittent and Rare Events
Oliver Schmidt | University of California, San Diego | United States
Authors:
Oliver Schmidt | University of California, San Diego | United States
Peter Schmid | Imperial College | United Kingdom
We present a conditional space-time proper orthogonal decomposition (POD) formulation tailored to the eduction of rare or intermittent events from an ensemble of realizations of a fluid process. By construction, the resulting spatio-temporal modes are coherent in space and over a pre-defined finite time horizon, and optimally capture the variance, or energy, of the ensemble. For the example of intermittent acoustic radiation from a turbulent jet, we introduce a conditional expectation operator that focuses on the loudest events, as measured by a pressure probe in the far field and contained in the tail of the pressure signal's probability distribution. Applied to high-fidelity simulation data, the method identifies a statistically significant "prototype", or average acoustic burst event, that is tracked over time. Most notably, the burst event can be traced back to its precursor, which opens up the possibility of predicting an imminent burst. We furthermore investigate the mechanism underlying the prototypical burst event using linear stability theory and find that its structure and evolution are accurately predicted by optimal transient growth theory.
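The condition-then-decompose pipeline can be sketched on synthetic data. Everything below (the burst shape, the probe statistic, the 90th-percentile threshold, the grid sizes) is a hypothetical stand-in for the jet data, chosen only to make the steps concrete: select the tail realizations via a probe signal, average them to obtain the prototype, and take the POD of the conditioned fluctuations via an SVD.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt, nreal = 32, 20, 200          # space points, time horizon, realizations

xg = np.linspace(0.0, 1.0, nx)
tg = np.arange(nt)

# synthetic ensemble: background noise plus occasional "burst" events
Q = 0.3 * rng.standard_normal((nreal, nx, nt))
burst = np.sin(np.pi * xg)[:, None] * np.exp(-0.5 * ((tg - 10) / 3.0)**2)[None, :]
is_burst = rng.random(nreal) < 0.1
Q[is_burst] += 3.0 * burst

# conditional expectation operator: keep realizations whose probe signal
# (here, the spatial mean) lies in the tail of its distribution
probe = np.abs(Q.mean(axis=1)).max(axis=1)
Q_cond = Q[probe >= np.quantile(probe, 0.9)]

# "prototype": the average event over the conditioned subset
prototype = Q_cond.mean(axis=0)

# space-time POD: each realization is one flattened space-time sample;
# modes are right singular vectors of the centered ensemble matrix
A = Q_cond.reshape(len(Q_cond), -1) - prototype.ravel()
U, s, Vt = np.linalg.svd(A, full_matrices=False)
modes = Vt.reshape(-1, nx, nt)       # spatio-temporal modes
energy = s**2 / np.sum(s**2)         # fraction of ensemble variance per mode
```

Because each sample is a full space-time realization rather than a single snapshot, the resulting modes are coherent over the whole time horizon, which is what allows an event to be tracked backward toward its precursor.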
15:00
Recurrent Neural Networks and Reservoir Computing for Spatio-Temporal Forecasting of Chaotic Dynamics: A Comparative Study
Pantelis Vlachas | ETH Zürich | Switzerland
Authors:
Pantelis Vlachas | ETH Zürich | Switzerland
Petros Koumoutsakos | ETH Zürich | Switzerland
Reservoir computing (RC) has been successful in forecasting the state evolution of high-dimensional dynamical systems over several Lyapunov times. Recurrent neural networks (RNNs) with long short-term memory (LSTM) have been effective in modeling temporal dependencies and forecasting reduced-order (observable) space dynamics. What are the advantages and drawbacks of these two popular algorithms? We answer this question through a systematic study and tests on several benchmark problems. We discuss implementation aspects and highlight the advantages and limitations of each model. Our benchmark results suggest that when the full state dynamics is available for training, RC outperforms RNNs in predictive performance and in capturing the long-term statistics. However, for reduced-order data, large RC models often diverge, as the maximum reservoir size (imposed by memory limitations) is too small to capture the dynamics. In contrast, RNNs capture the dynamics of these reduced-order models well, albeit at the cost of longer training times. We also extend a parallel architecture leveraging local system interactions, previously proposed for RC, to RNNs in order to expand their forecasting capabilities. This study confirms that RNNs and RC are state-of-the-art methodologies for effective forecasting of complex spatio-temporal dynamics.
Note: This is joint work with Jaideep Pathak, Edward Ott, Brian Hunt (University of Maryland) and Themis Sapsis (MIT).
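A minimal reservoir computer, an echo state network with a ridge-regression readout, can be sketched as follows. The chaotic logistic map stands in for the high-dimensional benchmarks of the study, and the reservoir size, spectral radius, and regularization are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# data: the chaotic logistic map x_{n+1} = 4 x_n (1 - x_n)
N = 3000
x = np.empty(N)
x[0] = 0.3
for n in range(N - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# echo state network: fixed random input and recurrent weights
nres = 300
Win = rng.uniform(-0.5, 0.5, nres)
W = rng.standard_normal((nres, nres))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run(u, r=None):
    """Drive the reservoir with input sequence u; return all states."""
    r = np.zeros(nres) if r is None else r
    R = np.empty((len(u), nres))
    for n, un in enumerate(u):
        r = np.tanh(Win * un + W @ r)
        R[n] = r
    return R, r

# only the linear readout is trained, by ridge regression: r_n -> x_{n+1}
train = 2000
R, r_last = run(x[:train])
R, Y = R[100:], x[101:train + 1]     # discard the initial transient
Wout = np.linalg.solve(R.T @ R + 1e-6 * np.eye(nres), R.T @ Y)

# autonomous forecast: feed each prediction back as the next input
r = r_last
u = r @ Wout                          # one-step prediction of x[train]
pred = [u]
for _ in range(4):
    r = np.tanh(Win * u + W @ r)
    u = r @ Wout
    pred.append(u)
```

After training, the readout's prediction is fed back as the next input, so the network runs autonomously; forecast quality then degrades at a rate set by the system's Lyapunov exponent, which is why predictive skill is naturally measured in Lyapunov times.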