08:30
Physics-informed Inference: Dimensional Analysis as Dimension Reduction
Zachary del Rosario | Stanford University | United States
Authors:
Zachary del Rosario | Stanford University | United States
Gianluca Iaccarino | Stanford University | United States
Dimension reduction is crucial for rendering high-dimensional UQ problems tractable. But in applying UQ to physical problems we aim not just for tractability, but for physically interpretable results. Recent efforts demonstrate an intimate connection between subspace dimension reduction (e.g. active subspaces and sufficient dimension reduction) and dimensional analysis, providing a way to physically interpret subspace inference. In this talk, we will show:
1. The intimate link between dimensionless numbers (e.g. the Reynolds number) and subspace dimension reduction
2. The unique capability of this dimensional analysis perspective to frame a hypothesis test for the presence of unknown unknowns, so-called lurking variables
3. An application of dimensional analysis-based subspace inference to large-scale turbulence simulations. Combined with so-called inspectional analysis, this allows attribution of observed phenomena to individual terms in the governing equations, leveraging the power of physically-interpretable, data-informed subspace inference.
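The first point admits a compact numerical illustration. The following is a minimal, hypothetical sketch (not the speakers' code): a QoI that depends on inputs (rho, U, L, mu) only through the Reynolds number is, in log coordinates, a ridge function, and an active-subspace computation on the gradients recovers the exponent vector of the dimensionless group. The model function and all constants are assumptions for the example.

```python
import numpy as np

# Hypothetical model: a QoI depending on (rho, U, L, mu) only through
# Re = rho*U*L/mu. In log coordinates this is a ridge function along the
# exponent direction w = (1, 1, 1, -1)/2 (normalized for convenience).
rng = np.random.default_rng(0)

def qoi_grad(logx):
    # f(logx) = sin(w . logx); its gradient is cos(w . logx) * w
    w = np.array([1.0, 1.0, 1.0, -1.0]) / 2.0
    return np.cos(logx @ w) * w

# Monte Carlo estimate of the gradient outer-product matrix C = E[g g^T]
X = rng.uniform(-1, 1, size=(2000, 4))
grads = np.array([qoi_grad(x) for x in X])
C = grads.T @ grads / len(X)

# Eigendecomposition: a single dominant eigenvector, aligned (up to sign
# and scale) with the exponents of the dimensionless group
vals, vecs = np.linalg.eigh(C)
active_dir = vecs[:, -1]
print(vals[::-1])                   # one nonzero eigenvalue, rest ~ 0
print(active_dir / active_dir[0])   # proportional to (1, 1, 1, -1)
```

The ratio of eigenvalues quantifies how close the QoI is to an exact function of the dimensionless group alone; a large trailing eigenvalue would instead suggest a lurking variable.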
08:50
Embedded Ridge Approximations: Constructing Ridge Approximations Over Localized Scalar Fields For Improved Simulation-centric Dimension Reduction
Chun Yui Wong | University of Cambridge | United Kingdom
Authors:
Chun Yui Wong | University of Cambridge | United Kingdom
Pranay Seshadri | University of Cambridge/The Alan Turing Institute | United Kingdom
Geoffrey Parks | University of Cambridge | United Kingdom
Mark Girolami | University of Cambridge/The Alan Turing Institute | United Kingdom
Parametric studies on quantities of interest (qois) in engineering often involve a large set of inputs and expensive simulations, posing challenges to design tasks such as optimization and uncertainty quantification. Output-based dimension reduction strategies aim at discovering dimension-reducing subspaces within the input domain that sufficiently summarize the functional variation of these qois. In ridge approximations, we leverage these strategies to form low-cost emulators.
While current approaches to ridge approximations focus on the variation of qois with respect to inputs, we propose an alternative paradigm called embedded ridge approximations that focuses on approximating an underlying scalar field, from which qois can be derived via integration. An example of this is the pressure field in a computational fluid dynamics simulation of airfoils, from which qois such as drag can be derived. In this talk, we introduce the key physical property intrinsic to many scalar fields favoring our method over conventional dimension reduction strategies, a notion we call localized scalar-field influence. In addition, we describe algorithms for computing and storing an embedded ridge approximation selectively, paying attention to the refinement of this approximation to adhere to physical principles such as conservation laws, as well as the recovery of the full scalar field and qois. We illustrate this through numerical examples including the shape design of a NACA0012 airfoil.
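As a rough illustration of the conventional input-output ridge-approximation paradigm that the talk builds upon (not the embedded variant proposed here), the sketch below recovers a hidden ridge direction from finite-difference gradients and then fits a cheap one-dimensional emulator on the reduced coordinate. The ridge direction and test function are assumptions for the example.

```python
import numpy as np

# Sketch of a ridge approximation f(x) ~ g(u^T x) for a scalar QoI
# (illustrative only; the hidden direction u is an assumption).
rng = np.random.default_rng(1)
d = 10
u = np.ones(d) / np.sqrt(d)          # hidden ridge direction

def f(x):
    return (x @ u) ** 2 + np.sin(x @ u)

# Estimate the direction from finite-difference gradients via an SVD,
# then fit a cheap 1D polynomial emulator g on t = u^T x.
X = rng.uniform(-1, 1, size=(500, d))
eps = 1e-5
G = np.array([[(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(d)] for x in X])
_, _, Vt = np.linalg.svd(G, full_matrices=False)
u_hat = Vt[0]                        # dominant right singular vector

t = X @ u_hat
g = np.polynomial.Polynomial.fit(t, f(X.T).T if False else np.array([f(x) for x in X]), 9)
print(np.max(np.abs(g(t) - np.array([f(x) for x in X]))))  # small residual
```

In the embedded setting advocated by the talk, this construction would be applied to nodal values of the underlying scalar field rather than to the integrated QoI, exploiting the localized influence property.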
09:10
- CANCELED - Uncertainty Quantification of High-dimensional Input and Output Spaces
Laura White | NASA Langley Research Center | United States
Authors:
Laura White | NASA Langley Research Center | United States
Thomas West | NASA Langley Research Center | United States
Computational models of complex engineering systems may contain a significant amount of uncertainty. Sampling-based approaches, such as Monte Carlo, are often used to propagate this uncertainty through simulations, but due to their computational cost, sampling-based methods may be prohibitive. As an alternative, surrogate-based techniques are widely used to reduce the number of expensive calculations by creating an often simple mathematical representation of a model output as a function of uncertain inputs. Surrogate-based approaches, however, suffer from the curse of dimensionality, meaning that the number of samples required from the computational model grows exponentially with the number of input dimensions. Additionally, multidimensional outputs, such as the distribution of the flowfield across a body, are particularly challenging for many surrogate-based methods. To overcome both obstacles, this work proposes using active subspaces to reduce the uncertain input dimensionality of a surrogate model based on proper orthogonal decomposition, which is used to handle multidimensional outputs. The computational savings are investigated and compared to current techniques. This work has been applied to assess the impact of uncertainty on a computational fluid dynamics model of a complex aerospace system.
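The output-side half of this strategy, proper orthogonal decomposition, reduces a field output to a handful of modal coefficients that a surrogate can then predict. A minimal sketch under assumed synthetic data (not the NASA model or code):

```python
import numpy as np

# Minimal POD sketch: compress multidimensional outputs (snapshots) with
# an SVD so a surrogate only needs to predict a few modal coefficients.
rng = np.random.default_rng(2)
n_cells, n_snapshots = 500, 40

# Synthetic flowfield snapshots lying (by construction) in a 3-mode subspace
modes_true = rng.standard_normal((n_cells, 3))
coeffs = rng.standard_normal((3, n_snapshots))
snapshots = modes_true @ coeffs

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank = number of POD modes
basis = U[:, :r]

# Any snapshot is reconstructed from just r modal coefficients
a = basis.T @ snapshots[:, 0]
recon = basis @ a
print(r, np.linalg.norm(recon - snapshots[:, 0]))
```

In the combined approach, a surrogate over the active-subspace coordinates of the uncertain inputs predicts the r coefficients `a`, replacing the full field evaluation.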
09:30
Reduced-Dimension Bayesian Learning Machines for Discovering Dynamical Ocean Model Functions
Abhinav Gupta | Massachusetts Institute of Technology (MIT) | United States
Authors:
Abhinav Gupta | Massachusetts Institute of Technology (MIT) | United States
Pierre F.J. Lermusiaux | Massachusetts Institute of Technology (MIT) | United States
We utilize and extend our rigorous PDE-based Bayesian learning framework for simultaneous learning of state variables, parameters, parameterizations, constitutive relations, and differential equations of high-dimensional dynamical models. The Bayesian learning machines can discriminate among existing models and now also extrapolate into the space of models to discover newer ones. The extended framework combines our Gaussian Mixture Model (GMM) - dynamically orthogonal (DO) filter for nonlinear reduced-dimension Bayesian inference with novel schemes from approximation theory and statistical learning theory for discovering new terms and functional forms in model equations. We also develop theory and methodology for handling stochastic boundary conditions, and for performing data-driven subspace augmentation using machine learning methods to represent the missing uncertainty in our reduced-dimension Bayesian inference. Results are showcased for varied coupled fish-biogeochemical-physical ocean dynamics.
09:50
Principal component analysis and boosted optimal weighted least-squares method for learning tree tensor networks
Cécile Haberstich | CEA/DAM | France
Authors:
Cécile Haberstich | CEA/DAM | France
Anthony Nouy | Ecole Centrale de Nantes | France
Guillaume Perrin | CEA/DAM | France
One of the most challenging tasks in computational science is the approximation of high-dimensional functions. Most of the time, only limited information about the functions is available, and approximating high-dimensional functions requires exploiting their low-dimensional structures. We propose a strategy to construct approximations of high-dimensional functions in tree-based tensor format (tree tensor networks whose graphs are dimension partition trees). It relies on an extension of principal component analysis (PCA) to multivariate functions [2]. In practice, PCA is realized on least-squares projections of partial evaluations of the function. A boosted optimal least-squares method, derived from [1], is presented for the projection of partial evaluations onto subspaces, which makes it possible to guarantee the stability of the projection with a number of evaluations close to the dimension of the subspace. The accuracy of the approximation strongly depends on the tree structure. We propose a leaf-to-root strategy which constructs a tree adapted to the underlying structure of the function in order to minimize the number of evaluations necessary to reach a given precision.
[1] A. Cohen and G. Migliorati. Optimal weighted least-squares methods. SMAI Journal of Computational Mathematics, 3:181-203, 2017.
[2] A. Nouy. Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats. Numerische Mathematik, 141(3):743-789, 2019.
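The optimal weighted least-squares idea of [1] can be sketched in one dimension: sample from the Christoffel-weighted measure of the approximation space, then solve a least-squares problem reweighted by its reciprocal. The sketch below (an illustration, not the boosted variant of the talk) uses an orthonormal Legendre basis; the space dimension, sample count, and test function are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(3)
m = 6  # dimension of the polynomial space (degrees 0..5)

def basis(x):
    # Legendre basis orthonormal w.r.t. the uniform measure on [-1, 1]
    return legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

def christoffel(x):
    # k(x) = sum_j phi_j(x)^2, bounded by m^2 on [-1, 1]
    return np.sum(basis(x) ** 2, axis=1)

# Rejection sampling from the optimal density k(x)/(2m) on [-1, 1]
samples = np.empty(0)
while samples.size < 60:
    x = rng.uniform(-1, 1, size=400)
    accept = rng.uniform(0, 1, size=400) < christoffel(x) / m ** 2
    samples = np.concatenate([samples, x[accept]])
x = samples[:60]

# Weighted least-squares projection with weights w = m / k(x)
w = m / christoffel(x)
f = lambda t: 0.5 * (5 * t ** 3 - 3 * t)      # P_3: lies in the space
V = basis(x)
c = np.linalg.lstsq(np.sqrt(w)[:, None] * V, np.sqrt(w) * f(x), rcond=None)[0]
print(c)   # only the degree-3 coefficient is nonzero: 1/sqrt(7)
```

The weighting makes the empirical Gram matrix concentrate around the identity, which is what yields a stable projection from a near-optimal number of evaluations; the boosting of the talk further conditions this random design.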
10:10
Hierarchical tensor methods for high-dimensional nonlinear PDEs
Alec Dektor | UC Santa Cruz | United States
Authors:
Alec Dektor | UC Santa Cruz | United States
Daniele Venturi | UC Santa Cruz | United States
In this talk we present a new method to compute the numerical solution of high-dimensional nonlinear PDEs on low-rank tensor manifolds. The key idea relies on a hierarchical decomposition of the solution space in terms of a sequence of nested subspaces of smaller dimension. This process, which can conveniently be visualized in terms of binary trees, yields series expansions that include classical Tensor-Train and Hierarchical Tucker representations. By enforcing dynamic orthogonality conditions at each level of the binary tree representing the solution tensor, we obtain coupled evolution equations for the tensor modes spanning each subspace. This allows us to compute the numerical solution of high-dimensional time-dependent PDEs on tensor manifolds with constant rank, with no need for computationally expensive rank reduction methods. New algorithms for dynamic addition and removal of modes and numerical examples involving high-dimensional hyperbolic and parabolic PDEs will be presented.
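The static compression underlying such representations can be illustrated with the sequential-SVD construction of a Tensor-Train decomposition (a sketch of the format only, not the dynamic-orthogonality time integrator of the talk; the separable test tensor is an assumption):

```python
import numpy as np

# TT-SVD sketch: compress a 4-dimensional tensor into Tensor-Train format
# by sequential SVDs, so storage scales with the ranks rather than
# exponentially in the dimension.
n = 5                                    # points per dimension
# Separable "solution" u(x1,...,x4) = prod_k sin(xk): all TT-ranks equal 1
grids = [np.sin(np.linspace(0, 1, n)) for _ in range(4)]
U = np.einsum('i,j,k,l->ijkl', *grids)

cores, ranks, mat = [], [1], U.reshape(1 * n, -1)
for k in range(3):
    Q, s, Vt = np.linalg.svd(mat, full_matrices=False)
    r = int(np.sum(s > 1e-12 * s[0]))    # truncate numerically zero ranks
    cores.append(Q[:, :r].reshape(ranks[-1], n, r))
    ranks.append(r)
    mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * n, -1)
cores.append(mat.reshape(ranks[-1], n, 1))

# Contract the train back and compare with the full tensor
rec = cores[0]
for core in cores[1:]:
    rec = np.tensordot(rec, core, axes=(rec.ndim - 1, 0))
rec = rec.reshape(n, n, n, n)
print(ranks[1:], np.max(np.abs(rec - U)))   # ranks [1, 1, 1], tiny error
```

In the method of the talk, the analogue of the cores evolves in time under coupled equations on a fixed-rank manifold, so this SVD-based recompression step is never needed during integration.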