The aim of this tutorial is to survey methods for smooth, high-dimensional function approximation that have become popular in Uncertainty Quantification over the last ten years. It is divided into three parts. In the first part, I will provide motivation and a short overview of polynomial approximation theory in high dimensions, including best s-term approximation rates for analytic functions. In the second part, I will develop techniques for computing polynomial approximations, including (adaptive) least squares and compressed sensing. In the third part, I will discuss recent research in this area based on approximation with neural networks. I will present several contemporary results in neural network approximation theory, but also highlight the key gap between such “existence theorems” and practical function approximation with deep learning. I will conclude with open problems and future directions.