The main topics of the mini-symposium include model uncertainty, robust uncertainty quantification & optimization, and their implications for predictive modeling guarantees and rare-event analysis. We aim to bring together closely related but possibly disparate communities in applied mathematics, applied probability, information theory, operations research, optimization, and economics, to foster interdisciplinary discussion and collaboration. Speakers will present recent mathematical and conceptual developments in related UQ methods, as well as applications ranging from engineering design of materials to econometrics and risk analysis.
08:30
Input vs. Output Modelling: Towards Measuring the Accuracy of Models
Bernd Heidergott | Vrije Universiteit Amsterdam | Netherlands
Motivated by model risk considerations, we develop a statistical procedure that captures model misspecification through model parameters. Our approach combines MLE estimators for model parameters obtained from input data with those obtained from output data. A key element is the development of Monte-Carlo-based estimators for the density of the output process and its derivatives, using knowledge only of the model's dynamics. We take the view that when an MLE is performed on the output, misspecification in the postulated model will show up as a discrepancy between the MLE fit on the input and the MLE fit on the output. This idea of "best fitting" at the output level is similar to the training of machine learning algorithms, which in recent years have been developed to find reliable representations of observed (output) data by statistical (econometric) models. In our talk, we will advocate the use of causal models rather than statistical ones.
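As a toy illustration of this input-versus-output comparison (my own minimal construction, not the speaker's actual model), the sketch below postulates exponential inputs, takes the minimum of two inputs as the output, and fits the rate parameter by MLE separately on the input and output data; when the true inputs are lognormal, the two fits disagree.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup (not the speaker's model): the postulated input model
# is X ~ Exp(theta), and the observed output is Y = min(X1, X2), which
# under the postulated model is Exp(2*theta).  The data are deliberately
# misspecified: the true inputs are lognormal, not exponential.
n = 200_000
x = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
y = [min(random.lognormvariate(0.0, 1.0), random.lognormvariate(0.0, 1.0))
    for _ in range(n)]

# MLE of theta under the postulated exponential model, fitted once to the
# input data and once to the output data.
theta_in = 1.0 / statistics.mean(x)
theta_out = 1.0 / (2.0 * statistics.mean(y))

# Under a correctly specified input model the two fits agree up to sampling
# noise; a persistent gap flags misspecification.
gap = abs(theta_in - theta_out) / theta_in
print(f"input fit {theta_in:.3f}, output fit {theta_out:.3f}, gap {gap:.1%}")
```

The output-side likelihood happens to be available in closed form here; the Monte-Carlo density estimators described in the abstract address the general case where the output density is not known explicitly.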
09:00
Information-Theoretic Approaches to Distributional Robustness
Jeremiah Birrell | University of Massachusetts Amherst | United States
Probabilistic models themselves carry uncertainty, due to imperfect knowledge of the system description and/or dynamics. I will discuss recent progress in developing information-theoretic tools that are adapted to various distributional-robustness questions and quantities-of-interest, including rare events and stochastic processes. These results will be illustrated by applications to queuing models and diffusion processes.
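One concrete instance of such an information-theoretic tool is the standard KL-divergence robustness bound (Gibbs variational principle): for any alternative model Q with KL(Q||P) <= eta, E_Q[f] <= inf_{c>0} (1/c)(log E_P[e^{cf}] + eta). The snippet below checks this numerically for an illustrative case I chose (Gaussian baseline, linear quantity of interest), where the bound is tight; it is not code from the talk.

```python
import math

# KL-divergence robustness bound:
#   E_Q[f] <= inf_{c>0} (1/c) * (log E_P[exp(c*f)] + KL(Q || P)).
# Illustrative choice: P = N(0,1), f(x) = x, alternatives Q = N(mu,1),
# so KL(Q||P) = mu^2/2 and the cumulant log E_P[exp(c*x)] = c^2/2.

def kl_robust_bound(cumulant, eta, c_grid):
    """Crude grid minimization of (cumulant(c) + eta) / c over c > 0."""
    return min((cumulant(c) + eta) / c for c in c_grid)

mu = 0.7
eta = 0.5 * mu * mu                          # KL(N(mu,1) || N(0,1))
c_grid = [0.01 * k for k in range(1, 500)]   # grid over the dual variable c
bound = kl_robust_bound(lambda c: 0.5 * c * c, eta, c_grid)

# The worst feasible Q is N(mu,1) itself, with E_Q[f] = mu; for this
# Gaussian/linear case the bound is attained at c = mu.
print(f"bound {bound:.4f} >= worst-case mean {mu:.4f}")
```

The same inequality, with the cumulant estimated by Monte Carlo instead of computed analytically, extends to the rare-event and stochastic-process quantities of interest mentioned in the abstract.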
09:30
- CANCELED - Ambiguity Aversion in Incomplete Information Games: Auctions and Oligopolistic Competition
Sung-Ha Hwang | Korea Advanced Institute of Science and Technology | Korea, Republic of
We study a model of incomplete information games under ambiguity, in which each player is averse to ambiguity about the other player's private information. We follow the maximin expected utility approach, in which a decision maker maximizes the worst-case expected utility. We examine the implications of ambiguity aversion in strategic interactions such as auctions and oligopolistic competition. We find that ambiguity aversion leads to overbidding in the first-price auction. In the all-pay auction, by contrast, ambiguity aversion induces underbidding at low valuations and overbidding at high valuations. We also find that ambiguity aversion enhances efficiency in more competitive environments, as in Bertrand models, while reducing efficiency in less competitive models of Cournot competition.
10:00
Computationally Efficient Quantification of Simulation Input Uncertainty
Henry Lam | Columbia University | United States
When running Monte Carlo or other stochastic simulations, input uncertainty arises when the input models that generate the random variates are subject to fitting or other sources of error. Conventional methods to quantify this input uncertainty, based on variance estimation or confidence interval construction, often entail substantial simulation costs due to the nested sampling requirements of the quantification schemes. We present several related approaches to reduce these costs, utilizing optimization, subsampling, and random perturbation, respectively. We explain the statistical mechanisms of these approaches and why they require significantly less simulation effort than some previous methods. We also compare them in terms of ease of implementation and empirical performance.
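To make the nested-sampling cost concrete, here is a plain bootstrap baseline for input uncertainty (a minimal sketch of the standard scheme, not the speaker's reduced-cost methods; the input data, performance function, and budgets are all hypothetical): each of B bootstrap resamples of the input data requires its own batch of simulation runs, so the total cost is B * reps.

```python
import random
import statistics

random.seed(1)

def simulate(input_sample, reps):
    """Monte Carlo estimate of E[h(X)], with X drawn from the empirical
    distribution of input_sample."""
    h = lambda x: max(x - 1.0, 0.0)      # hypothetical performance function
    return statistics.mean(h(random.choice(input_sample)) for _ in range(reps))

# Observed input data (stand-in for real-world data feeding the input model).
data = [random.expovariate(1.0) for _ in range(200)]

# Basic bootstrap: resample the input data B times, rerun the simulation for
# each resample -- a nested B * reps simulation budget.
B, reps = 100, 500
boot = sorted(simulate(random.choices(data, k=len(data)), reps)
              for _ in range(B))
lo, hi = boot[int(0.025 * B)], boot[int(0.975 * B)]   # percentile 95% CI

point = simulate(data, 5 * reps)
print(f"estimate {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The CI width mixes input uncertainty with simulation noise, and shrinking the latter drives `reps` (and hence the nested cost) up; the optimization, subsampling, and random-perturbation approaches in the talk target exactly this cost.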