The rapid development of machine learning (ML) has driven explosive advances across science and engineering. In particular, deep learning methods allow one to study highly complex systems that are nearly intractable by traditional methods. Uncertainty and noise are ubiquitous in such complex systems, and the design and application of ML methods must take these uncertainties into account. Conversely, modern ML methods also offer new perspectives and algorithm-design possibilities for uncertainty quantification (UQ). This mini-symposium focuses on the interface of ML and UQ. Our aim is to bring together a group of experts in ML and UQ and to foster knowledge exchange. Our primary focus is on two fronts: (1) how uncertainty can be quantified in modern ML methods; and (2) how modern ML methodology can help UQ to further our understanding and advance algorithm design.
14:00
Long-Time Forecasting Methods Leveraging Machine Learning
Henning Lange | University of Washington | United States
Authors:
J. Nathan Kutz | University of Washington, Seattle | United States
Henning Lange | University of Washington | United States
We develop a spectral method for long-term forecasting of signals stemming from linear as well as non-linear ergodic dynamical systems. The algorithm performs a spectral decomposition in a non-linear and data-dependent basis. We derive expressions for the loss as a function of the parameters that model temporal dependencies in the frequency domain, which allows us to compute globally optimal parameters in a scalable and efficient manner by making use of the Fast Fourier Transform. The efficacy of the algorithm is evaluated in the context of predicting signals in the realms of power systems, meteorology and turbulent flow.
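As a rough illustration of forecasting by extrapolating a spectral decomposition, the sketch below fits the dominant Fourier modes of a training signal with the FFT and evaluates them at future times. It uses a fixed linear (Fourier) basis rather than the non-linear, data-dependent basis described in the abstract, and the function name and interface are hypothetical, not the authors' code.

```python
import numpy as np

def spectral_forecast(signal, dt, n_modes, t_future):
    """Fit the n_modes largest-amplitude Fourier modes of `signal`
    (sampled with spacing dt) and evaluate them at times t_future."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=dt)
    coeffs = np.fft.rfft(signal) / n
    keep = np.argsort(np.abs(coeffs))[::-1][:n_modes]  # dominant modes
    pred = np.zeros_like(t_future, dtype=float)
    for k in keep:
        amp, phase = np.abs(coeffs[k]), np.angle(coeffs[k])
        # rfft folds conjugate pairs, except at DC and (for even n) Nyquist
        scale = 1.0 if k == 0 or (n % 2 == 0 and k == n // 2) else 2.0
        pred += scale * amp * np.cos(2 * np.pi * freqs[k] * t_future + phase)
    return pred
```

Because the fitted modes are valid for all times, the model extrapolates far beyond the training window, which is the appeal of spectral forecasting for long horizons.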
14:30
Surrogate Modeling and Uncertainty Quantification of Dynamical PDE Systems with Bayesian Physics-Constrained Deep Neural Networks
Nicholas Geneva | University of Notre Dame | United States
Authors:
Nicholas Geneva | University of Notre Dame | United States
Nicholas Zabaras | University of Notre Dame | United States
We present a Bayesian auto-regressive convolutional neural network for modeling and uncertainty quantification of non-linear dynamical systems without training data. In recent years, deep learning has proven to be a viable methodology for modeling physical systems. However, in their traditional form, such models can require a large amount of training data which may be extremely expensive to obtain or not available at all. To overcome this shortcoming, we employ physics-constrained learning which provides a promising training methodology as it only utilizes the governing equations. A Bayesian framework is implemented allowing for uncertainty quantification of the predicted quantities of interest. We rigorously test this model on several non-linear transient partial differential equation systems to demonstrate its applicability to various physical problems.
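To illustrate what a physics-constrained (data-free) training objective can look like, the sketch below computes the squared residual of an explicit time step of the 1D viscous Burgers equation with periodic finite differences; in such a framework the residual, evaluated on the network's auto-regressive prediction, replaces a data-misfit loss. The choice of equation, discretization, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pde_residual_loss(u_next, u_prev, dt, dx, nu=0.1):
    """Squared residual of an explicit step of 1D viscous Burgers,
    u_t + u u_x = nu u_xx, with periodic finite differences.
    In physics-constrained training, u_next is the network's prediction;
    no labeled solution data enters the loss."""
    u_x = (np.roll(u_prev, -1) - np.roll(u_prev, 1)) / (2 * dx)
    u_xx = (np.roll(u_prev, -1) - 2 * u_prev + np.roll(u_prev, 1)) / dx**2
    residual = (u_next - u_prev) / dt + u_prev * u_x - nu * u_xx
    return np.mean(residual**2)
```

A field that satisfies the discretized governing equation drives this loss to zero, so minimizing it trains the surrogate using only the physics.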
15:00
Deep generative priors for quantifying uncertainty
Assad Oberai | University of Southern California | United States
Author:
Assad Oberai | University of Southern California | United States
Generative adversarial networks (GANs) have demonstrated a remarkable ability to learn the underlying distribution of a complex field from a collection of its samples. In doing so, these networks also reduce the dimension of the field by generating samples from a vector space of smaller dimension. In this talk we will explore the use of GANs as priors in a Bayesian inference problem. When viewed from this perspective, they provide the ability to efficiently generate sample-based priors and to quantify uncertainty in an inference problem. We will discuss the utility of this approach and provide examples that are motivated by physics-based inference problems.
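One way to use a generator as a prior is MAP estimation over the low-dimensional latent space: combine a Gaussian likelihood for measurements y = A G(z) + noise with a standard-normal prior on z. The toy below does this with plain gradient descent and a linear stand-in for a trained generator; the interface, step size, and names are hypothetical sketches, not the talk's algorithm.

```python
import numpy as np

def map_latent(generator, jacobian, y, A, noise_std, z0, steps=500, lr=0.05):
    """Gradient-descent MAP estimate of the latent code z for the model
    y = A @ generator(z) + Gaussian noise, with a N(0, I) prior on z."""
    z = z0.copy()
    for _ in range(steps):
        r = A @ generator(z) - y                            # data misfit
        grad = jacobian(z).T @ A.T @ r / noise_std**2 + z   # likelihood + prior
        z = z - lr * grad
    return z
```

Because z lives in a much smaller space than the field G(z), the optimization (and, more generally, posterior sampling over z) is far cheaper than inference in the full field space.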
15:30
Learning Moment Equations using Measurement Data
Weize Mao | Ohio State University | United States
Authors:
Weize Mao | Ohio State University | United States
Dongbin Xiu | Ohio State University | United States
We present a numerical approach to learning the unknown governing equations for stochastic systems. The method uses deep neural networks and incorporates memory effects. It relies on measurement data to provide an effective "closure" model for the evolution of moments.
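A minimal sketch of a data-driven closure with memory: fit a one-step model that predicts the next moment vector from the previous k moment vectors, by least squares on a measured trajectory. The linear model below stands in for the deep network described in the abstract, and all names and interfaces are illustrative assumptions.

```python
import numpy as np

def fit_memory_closure(moments, k):
    """Least-squares fit of mu_{n+1} ~ [mu_n, mu_{n-1}, ..., mu_{n-k+1}] @ A
    from a measured moment trajectory `moments` of shape (N, d)."""
    X = np.hstack([moments[k - 1 - j : len(moments) - 1 - j] for j in range(k)])
    Y = moments[k:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A  # shape (k*d, d)

def rollout(A, history, steps):
    """Roll the fitted closure forward from the last k rows of `history`."""
    k = A.shape[0] // history.shape[1]
    window = [history[len(history) - k + j] for j in range(k)]  # oldest..newest
    preds = []
    for _ in range(steps):
        x = np.hstack(window[::-1])  # newest first, matching the fit ordering
        nxt = x @ A
        preds.append(nxt)
        window = window[1:] + [nxt]
    return np.array(preds)
```

The memory window plays the role alluded to in the abstract: unresolved degrees of freedom make the moment dynamics non-Markovian, so the closure conditions on a history of past moments rather than the current state alone.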