In this minisymposium, we explore the symbiotic relationship between computational statistics and computational dynamics. The interaction between the two fields has long been established: efficiently computing statistics of dynamical quantities is of interest in science and engineering, and cleverly constructed dynamical systems are used to sample from high-dimensional probability distributions. We will highlight recent advances in numerical methods that use tools from one field to solve problems in the other in a novel fashion. We will exhibit new algorithms for chaotic sensitivity analysis, rare event simulation, stochastic optimal control, and data assimilation.
16:30
Gibbs posterior convergence and the thermodynamic formalism
Sayan Mukherjee | Duke University | United States
Author:
Sayan Mukherjee | Duke University | United States
We consider a Bayesian framework for making inferences about dynamical systems from ergodic observations. The proposed Bayesian procedure is based on the Gibbs posterior, a decision-theoretic generalization of standard Bayesian inference. We place a prior over a model class consisting of a parametrized family of Gibbs measures on a mixing shift of finite type. This model class generalizes (hidden) Markov chain models by allowing for long-range dependencies, including Markov chains of arbitrarily large order. We characterize the asymptotic behavior of the Gibbs posterior distribution on the parameter space as the number of observations tends to infinity. In particular, we define a limiting variational problem over the space of joinings of the model system with the observed system, and we show that the Gibbs posterior distributions concentrate around the solution set of this variational problem. In the case of properly specified models, our convergence results may be used to establish posterior consistency. This work establishes tight connections between Gibbs posterior inference and the thermodynamic formalism, which may inspire new proof techniques in the study of Bayesian posterior consistency for dependent processes. Time permitting, we will discuss large deviation principles, specifically the Laplace principle, as a general tool for proving posterior consistency.
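For readers unfamiliar with the construction, the Gibbs posterior can be written schematically as follows; the notation here is ours and is chosen purely for illustration, while the talk's precise setup involves Gibbs measures on a shift of finite type:

```latex
\Pi_n(\mathrm{d}\theta \mid x_{1:n}) \;\propto\; \exp\bigl(-\omega\, n\, R_n(\theta;\, x_{1:n})\bigr)\,\Pi(\mathrm{d}\theta)
```

Here $R_n$ is an empirical risk that replaces the negative average log-likelihood of standard Bayesian updating, $\omega > 0$ is a learning-rate parameter, and $\Pi$ is the prior over the parametrized model class; taking $R_n$ to be the negative average log-likelihood with $\omega = 1$ recovers the usual Bayesian posterior.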
17:00
Data-driven approximation of the Koopman generator: Model reduction, system identification, and control
Stefan Klus | Freie Universität Berlin | Germany
Authors:
Stefan Klus | Freie Universität Berlin | Germany
Feliks Nüske | Rice University | United States
Sebastian Peitz | Paderborn University | Germany
Jan-Hendrik Niemann | Zuse Institute Berlin | Germany
Cecilia Clementi | Rice University | United States
Christof Schütte | Freie Universität Berlin | Germany
We present a data-driven method for the approximation of the Koopman generator. It can be used for computing eigenvalues, eigenfunctions, and modes of the generator and also for system identification. In addition to learning the governing equations of deterministic systems, in which case the method reduces to SINDy (sparse identification of nonlinear dynamics), it is possible to identify the drift and diffusion terms of stochastic differential equations from data. Moreover, it enables us to derive coarse-grained models of high-dimensional systems and to determine efficient model predictive control strategies.
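To convey the flavor of such data-driven generator approximations, the sketch below fits a generator matrix by least squares along an observed trajectory. It is an illustration of the general idea, not the authors' algorithm; the monomial dictionary, the forward-difference derivative estimate, and the one-dimensional deterministic setting are simplifying assumptions made here.

```python
import numpy as np

# Minimal sketch of a data-driven approximation of the Koopman generator for a
# deterministic system observed along a trajectory (illustrative assumptions:
# 1D state, monomial dictionary, forward-difference time derivatives).

def dictionary(x):
    """Illustrative dictionary of observables: monomials up to degree 3."""
    return np.array([1.0, x, x**2, x**3])

def estimate_generator(X, dt):
    """Fit L so that d/dt psi(x_t) is approximated by L^T psi(x_t) in a least-squares sense."""
    Psi = np.array([dictionary(x) for x in X[:-1]])                # observables along the trajectory
    dPsi = (np.array([dictionary(x) for x in X[1:]]) - Psi) / dt   # finite-difference time derivatives
    L = np.linalg.pinv(Psi) @ dPsi                                 # least-squares fit: dPsi ~ Psi @ L
    return L

# Eigenvectors v of L (L v = lam v) give coefficients of approximate generator
# eigenfunctions phi(x) = v @ dictionary(x); replacing the least-squares fit by
# a sparse regression yields SINDy-like system identification.
```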
17:30
Efficient sampling methods for stochastic dynamical systems using Koopman eigenfunctions
Ben Zhang | Massachusetts Institute of Technology | United States
Authors:
Ben Zhang | Massachusetts Institute of Technology | United States
Tuhin Sahai | United Technologies Research Center | United States
Youssef Marzouk | Massachusetts Institute of Technology | United States
We propose a general approach to constructing efficient sampling methods for stochastic differential equations using eigenfunctions of the Koopman operator. Importance sampling for SDEs is typically performed through a Girsanov transformation, in which one alters the drift term of the SDE so that the resulting importance sampling estimator has lower variance. It is well known that the Doob transform and solutions to the Kolmogorov backward equation (KBE) determine how one can change the drift so that the resulting estimator has zero variance. The stochastic Koopman operator of the dynamical system shares the same eigenfunctions as the KBE, which allows us to approximate the Doob transform using Koopman eigenfunctions.
For nonlinear systems, we use dynamic mode decomposition methodology to identify the eigenfunctions. For linear systems, we show how the eigenfunctions can be found exactly even for high-dimensional state spaces. By combining these concepts, we devise sampling methods that estimate rare event probabilities efficiently. Lastly, we explore how the eigenfunctions can be used to sample unnormalized probability distributions.
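To fix ideas, the standard zero-variance construction alluded to above can be sketched as follows (notation ours): for an SDE $\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$ and a terminal-time quantity of interest, the Doob transform replaces the drift by

```latex
\tilde{b}(x,t) \;=\; b(x) \;+\; \sigma(x)\sigma(x)^{\top}\,\nabla_x \log h(x,t)
```

where $h$ solves the Kolmogorov backward equation $\partial_t h + \mathcal{L} h = 0$ with terminal condition determined by the quantity of interest. Simulating under the modified drift, together with the corresponding Girsanov likelihood ratio, yields a zero-variance importance sampling estimator; the talk approximates $h$, and hence $\nabla \log h$, through Koopman eigenfunctions.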
18:00
The role of lag time in spectral estimation for Markov processes
Robert Webber | New York University | United States
Authors:
Robert Webber | New York University | United States
Jonathan Weare | New York University | United States
With modern computational resources, it is increasingly common to perform spectral estimation for Markov processes using large sets of simulated data. Researchers are rapidly developing algorithms for estimating eigenvalues and eigenvectors of the transition operator and applying these algorithms to produce significant scientific discoveries. While there has been some progress toward a mathematical understanding of these algorithms, there remain questions of optimal design and error analysis. Our work examines a leading algorithm for data-driven spectral estimation, called the variational approach for conformation dynamics (VAC for short). The VAC algorithm approximates eigenvalues and eigenvectors using time-lagged autocorrelations evaluated at a given lag time. Here, we examine the importance of the lag time, a key design choice that has been under-appreciated in past analyses. First, we explain why VAC's systematic error decreases at long lag times. Second, we show how the choice of lag time can heighten VAC's sensitivity to sampling error. Building on our mathematical results, we encourage VAC users to plot VAC eigenvalues as a function of lag time and select a lag time that maximizes the gap between relevant eigenvalues. Through numerical examples, we show that this lag-time selection procedure can lead to substantial improvement over informal lag-time selection strategies that have been proposed in the past.
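A minimal sketch of the suggested lag-time selection heuristic might look like the following. This is illustrative only and not the authors' implementation: the basis evaluation `Psi`, the symmetrized correlation estimator, and the gap index `k` are assumptions made here.

```python
import numpy as np

# Illustrative sketch: compute VAC eigenvalues over a range of lag times and
# pick the lag that maximizes the gap between two chosen eigenvalues.
# Psi is an (m, n_basis) array of basis functions evaluated along one long
# trajectory with a fixed time step.

def vac_eigenvalues(Psi, lag):
    """Solve the VAC generalized eigenvalue problem C(lag) v = lam C(0) v."""
    X, Y = Psi[:-lag], Psi[lag:]
    C0 = X.T @ X / len(X)            # instantaneous correlation matrix
    Ct = X.T @ Y / len(X)            # time-lagged correlation matrix
    Ct = 0.5 * (Ct + Ct.T)           # symmetrize (reversible estimator, an assumption here)
    lam = np.linalg.eigvals(np.linalg.solve(C0, Ct))
    return np.sort(lam.real)[::-1]   # eigenvalues in decreasing order

def select_lag(Psi, lags, k):
    """Pick the lag maximizing the gap between eigenvalues k-1 and k; plot gaps vs. lags to inspect."""
    gaps = [vac_eigenvalues(Psi, lag)[k - 1] - vac_eigenvalues(Psi, lag)[k] for lag in lags]
    return lags[int(np.argmax(gaps))], gaps
```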