Max Morris | Iowa State University | United States
Humankind has been performing experiments of one kind or another since before the span of recorded history. The relatively modern discipline of statistics formalized some of the basic concepts of experimentation, including how experiments might most usefully be planned, or designed. Because statistics focuses on separating and comparing what, under various definitions, are understood to be the signal and noise components of data, this focus has shaped how statistical experimental design developed.
In the early days of statistical experimental design, the characterization of noise focused largely on the uncontrollable influence of experimental material, or experimental units. Nuisance variation from other sources of uncertainty, such as measurement error, led to a more model-based view of statistical analysis, which in turn spurred the development of concepts and advances in optimal experimental design. More recently, statistical methods for analyzing the output of deterministic computer models have been developed, along with associated design methodology. In this context, random processes are used to represent the uncertainty associated with model evaluations that cannot be performed, or with discrepancies between model and reality. The intent of this overview is to trace how these different sources and views of noise and uncertainty have led to experimental design for an evolving spectrum of applications ... from field trials to finite elements.