The challenge of acquiring the most valuable data from experiments (for inference, prediction, design, or control) has received substantial attention in statistics, applied mathematics, science, and engineering. This task can be formalized through the framework of optimal experimental design (OED). Models of the experimental conditions and processes, both physical and statistical, are particularly useful for arriving at optimal designs. However, model-based OED faces many challenges: formulating the design problem, choosing optimality criteria, computing information metrics, handling nonlinear responses and non-Gaussian distributions, and dealing with expensive, dynamically evolving simulations. This minisymposium invites researchers working on model-based optimal experimental design, spanning both computational and applications-oriented developments.
08:30
Optimal Bayesian Design of Sequential Experiments Using Reinforcement Learning with Policy Gradient Method
Xun Huan | University of Michigan | United States
Authors:
Wanggang Shen | University of Michigan | United States
Xun Huan | University of Michigan | United States
Experiments are indispensable for learning and developing models in engineering and science. When experiments are expensive, careful design of these limited data-acquisition opportunities can be immensely beneficial. Optimal experimental design, leveraging the predictive capabilities of a simulation model, provides a rigorous framework to systematically quantify and maximize the value of experiments. We focus on designing a finite sequence of experiments, seeking fully optimal design policies (strategies) that can (a) adapt to newly collected data during the sequence (i.e., feedback) and (b) anticipate future changes (i.e., lookahead). We cast this sequential decision-making problem in a Bayesian setting with information-based utilities, and solve it numerically via policy gradient methods from reinforcement learning. In particular, we directly parameterize the policies and value functions by neural networks—thus adopting an actor-critic approach—and improve them using gradient estimates produced from simulated design and observation sequences. The overall method is demonstrated on an algebraic benchmark and a sensor-movement application for source inversion. The results provide intuitive insights into the benefits of feedback and lookahead, and indicate substantial computational advantages over previous numerical approaches based on approximate dynamic programming.
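As a schematic illustration of the policy-gradient idea (not the authors' implementation: the one-experiment linear-Gaussian model, the Gaussian policy, and every constant below are assumptions, chosen so the information gain has a closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigma_n = 2.0, 1.0  # assumed prior and noise standard deviations

def expected_info_gain(d):
    # Closed-form expected information gain for the toy model
    # y = theta*d + eps, theta ~ N(0, sigma0^2), eps ~ N(0, sigma_n^2)
    return 0.5 * np.log(1.0 + (d * sigma0 / sigma_n) ** 2)

# Gaussian policy over the scalar design d, with learnable mean (std held fixed)
mu, std, lr = 0.1, 0.5, 0.05
for _ in range(2000):
    d = mu + std * rng.standard_normal()   # sample a design from the policy
    r = expected_info_gain(d)              # reward = utility of that design
    grad_mu = r * (d - mu) / std**2        # REINFORCE score-function estimate
    mu += lr * grad_mu                     # stochastic gradient ascent

# The policy mean drifts toward designs with higher information gain.
assert expected_info_gain(mu) > expected_info_gain(0.1)
```

A real sequential problem replaces the scalar mean with a neural-network policy conditioned on the accumulated data (the actor) and adds a learned value function (the critic); the score-function gradient above is the common core.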
09:00
Bayesian design of experiments for an alternative model
Antony Overstall | University of Southampton | United Kingdom
Author:
Antony Overstall | University of Southampton | United Kingdom
Decision-theoretic Bayesian design of experiments is considered when the statistical model used to perform the analysis differs from the model used to design the experiment. Closed-form results are derived for the special case of normal linear models, and large-sample approximations for general cases. These are compared with the case in which the fitted model and the design model are identical.
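As a small numerical illustration of why the analysis model matters for the design (a generic D-optimality example, not the paper's decision-theoretic derivations; the design grid, the polynomial models, and the three-point designs are assumptions):

```python
import numpy as np
from itertools import combinations_with_replacement

grid = np.linspace(-1.0, 1.0, 21)  # candidate design points on [-1, 1]

def d_criterion(points, degree):
    # D-optimality: determinant of the information matrix X'X
    # for a polynomial regression of the given degree
    X = np.vander(np.asarray(points), degree + 1, increasing=True)
    return np.linalg.det(X.T @ X)

def best_design(n_points, degree):
    # Exhaustive search over all n-point designs on the grid
    return np.array(max(combinations_with_replacement(grid, n_points),
                        key=lambda p: d_criterion(p, degree)))

lin = best_design(3, 1)    # design optimized for a straight-line model
quad = best_design(3, 2)   # design optimized for a quadratic model

# The straight-line design piles all points on the endpoints, so it cannot
# even estimate curvature; the quadratic design demands the center point too.
assert 0.0 not in lin and 0.0 in quad
```

If the experiment is designed under one of these models but analyzed under the other, the design is no longer optimal for the analysis actually performed; the paper makes this mismatch precise for normal linear models.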
09:30
- CANCELED - Active Learning in Computational Catalysis using Stacked Gaussian Processes
Kareem Abdelfatah | University of South Carolina | United States
Authors:
Kareem Abdelfatah | University of South Carolina | United States
Wenqiang Yang | University of South Carolina | United States
Andreas Heyden | University of South Carolina | United States
Gabriel Terejanu | University of North Carolina at Charlotte | United States
Computational catalyst screening typically involves developing a microkinetic reaction model whose parameters are determined from transition-state theory, where the (free) energies of all adsorbates and transition states can in principle be determined from density functional theory (DFT) calculations. The computational effort becomes extremely large when the goal is to screen tens or hundreds of possible active-site structures. A particular burden is the computation of transition-state energies, which is about an order of magnitude more time-intensive than the computation of ground states, since a reaction network generally consists of many more reactions (i.e., transition states) than surface intermediates (i.e., ground states). In this work, we use an active-learning approach that enhances the predictions of a stacked Gaussian process with new DFT calculations for transition-state energies, identified using a rate-controlling-step analysis, to accurately predict the overall turnover frequency.
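A minimal sketch of the active-learning loop itself (a generic one-dimensional uncertainty-sampling example: the RBF kernel, the synthetic energy function standing in for a DFT calculation, and all constants are assumptions, and the stacked-GP and rate-controlling-step machinery is not reproduced):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel on scalars (unit prior variance, assumed length scale)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(x_tr, y_tr, x_te, jitter=1e-6):
    # Standard GP regression equations with a small jitter for stability
    K = rbf(x_tr, x_tr) + jitter * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    mean = Ks.T @ np.linalg.solve(K, y_tr)
    var = np.diag(rbf(x_te, x_te) - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.clip(var, 0.0, None))

def expensive_energy(x):
    # Cheap stand-in for a DFT transition-state energy calculation
    return np.sin(3 * x) + 0.5 * x

pool = np.linspace(0.0, 2.0, 50)    # candidate calculations
x_train = np.array([0.0, 2.0])      # two initial "DFT" results
y_train = expensive_energy(x_train)

_, std0 = gp_posterior(x_train, y_train, pool)
for _ in range(10):                 # active-learning loop
    _, std = gp_posterior(x_train, y_train, pool)
    x_next = pool[np.argmax(std)]   # query the most uncertain candidate
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, expensive_energy(x_next))

_, std_final = gp_posterior(x_train, y_train, pool)
# Ten targeted queries shrink the predictive uncertainty across the whole pool.
assert std_final.max() < std0.max() / 3
```

The abstract's approach replaces the acquisition rule here (maximum predictive uncertainty) with one driven by rate-controlling steps, so that expensive calculations go to the transition states that actually control the predicted turnover frequency.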
10:00
Optimal Bayesian design for models with intractable likelihoods via supervised learning methods
Markus Hainy | Johannes Kepler University | Austria
Authors:
Markus Hainy | Johannes Kepler University | Austria
David Price | University of Melbourne | Australia
Christopher Drovandi | Queensland University of Technology | Australia
Optimal Bayesian experimental design is often computationally intensive due to the need to approximate many posterior distributions for datasets simulated from the prior predictive distribution. The issues are compounded further when the statistical models of interest do not possess tractable likelihood functions and only simulation is feasible. We employ supervised learning methods to facilitate the computation of utility values in optimal Bayesian design. This approach requires considerably fewer simulations from the candidate models than previous approaches using approximate Bayesian computation. The approach is particularly useful in the presence of models with intractable likelihoods but can also provide computational advantages when the likelihoods are manageable. We consider the two experimental goals of model discrimination and parameter estimation. The methods are applied to find optimal designs for models in epidemiology and cell biology.
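To illustrate how supervised learning can stand in for likelihood evaluations in a design utility (a toy sketch, not the authors' method: the two Gaussian candidate models, the sample-mean summary statistic, and the plain logistic-regression classifier are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(model, d, n_rep=500, n_obs=5):
    # Stand-ins for simulator-only models: model 0 gives y ~ N(d, 1),
    # model 1 gives y ~ N(2d, 1); each row is one simulated dataset
    mean = d if model == 0 else 2 * d
    return rng.normal(mean, 1.0, size=(n_rep, n_obs))

def discrimination_utility(d):
    # Feature: the sample mean of each simulated dataset
    x = np.concatenate([simulate(0, d), simulate(1, d)]).mean(axis=1)
    y = np.repeat([0.0, 1.0], 500)
    x = (x - x.mean()) / (x.std() + 1e-12)  # standardize for stable training
    idx = rng.permutation(1000)
    tr, te = idx[:700], idx[700:]           # train/test split
    w, b = 0.0, 0.0
    for _ in range(500):                    # logistic regression by gradient descent
        p = 1.0 / (1.0 + np.exp(-(w * x[tr] + b)))
        g = p - y[tr]
        w -= 0.1 * (g * x[tr]).mean()
        b -= 0.1 * g.mean()
    pred = (1.0 / (1.0 + np.exp(-(w * x[te] + b)))) > 0.5
    return (pred == (y[te] > 0.5)).mean()   # held-out accuracy = utility estimate

# At d = 0 the two models coincide (utility near 0.5); at d = 3 they separate.
assert discrimination_utility(3.0) > discrimination_utility(0.0)
```

Only simulation from the candidate models is required (no likelihood evaluations); the classifier's held-out accuracy acts as a Monte Carlo estimate of a model-discrimination utility at each candidate design, which an outer optimizer can then maximize.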