Comparing Bayesian optimization with traditional sampling


Welcome to video #2 of the Adaptive Experimentation series, presented by graduate student Sterling Baird @sterling-baird at the 18th IEEE Conference on eScience in Salt Lake City, UT (Oct 10-14, 2022). In this video, Sterling introduces Bayesian optimization as an alternative method for sampling data. Bayesian optimization is a powerful tool for optimizing expensive objectives such as the performance of machine learning algorithms, and Sterling compares its performance on design-of-experiments tasks to traditional sampling methods such as grid, random, and pseudo-random sampling. He also discusses the expected improvement acquisition function and benchmarks for evaluating these methods, testing them on both low-dimensional and high-dimensional problems. Stay tuned for the next installment in the series for more insights on adaptive experimentation.
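
As a rough companion to the traditional sampling baselines discussed in the video, here is a minimal sketch (not the notebook's code) of grid, random, and quasi-random (Sobol) sampling on the 2D Branin test function using NumPy and SciPy; the point budget and seeds are illustrative.

```python
import numpy as np
from scipy.stats import qmc  # quasi-random (Sobol) sampling

def branin(x1, x2):
    """2D Branin test function; its global minimum is about 0.397887."""
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

bounds_lo, bounds_hi = np.array([-5.0, 0.0]), np.array([10.0, 15.0])
n = 16  # total budget of function evaluations

# Grid sampling: evenly spaced points in each dimension (4 x 4 here).
g1, g2 = np.meshgrid(
    np.linspace(bounds_lo[0], bounds_hi[0], int(np.sqrt(n))),
    np.linspace(bounds_lo[1], bounds_hi[1], int(np.sqrt(n))),
)
grid_pts = np.column_stack([g1.ravel(), g2.ravel()])

# Random sampling: i.i.d. uniform points in the box.
rng = np.random.default_rng(0)
rand_pts = rng.uniform(bounds_lo, bounds_hi, size=(n, 2))

# Quasi-random sampling: a low-discrepancy Sobol sequence that fills space evenly.
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
sobol_pts = qmc.scale(sobol.random(n), bounds_lo, bounds_hi)

for name, pts in [("grid", grid_pts), ("random", rand_pts), ("sobol", sobol_pts)]:
    best = branin(pts[:, 0], pts[:, 1]).min()
    print(f"{name:>6}: best of {len(pts)} samples = {best:.3f}")
```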

GitHub link to the Jupyter notebook: https://github.com/sparks-baird/self-...


Previous video in the series: Traditional sampling techniques (grid...
Next video in the series: Closed-loop optimization of inexpensi...

0:00 traditional design of experiments, review of video 1
3:48 adaptive experimentation definition
6:00 Bayesian optimization and expected improvement acquisition function
8:17 optimization benchmarks
8:50 objective functions (2D Branin function)
10:52 Meta's Adaptive Experimentation (Ax) platform & Loop API (see the sketch after this list)
14:20 Bayesian optimization performance
18:33 comparison of search efficiency (adaptive vs. traditional optimization), including visualization
21:17 comparing performance in higher dimensions (Hartmann6 function)
24:15 summary of optimization results
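
For the Ax Loop API chapter (10:52), here is a minimal hedged sketch of what a Bayesian optimization loop over the 2D Branin function can look like with Ax's `optimize` convenience function; the trial budget and parameter names are illustrative and not taken from the linked notebook.

```python
import numpy as np
from ax import optimize  # Ax Loop API convenience function

def branin(params):
    """Evaluation function for Ax: takes a parameter dict, returns the objective value."""
    x1, x2 = params["x1"], params["x2"]
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

# By default, Ax runs a few quasi-random (Sobol) trials and then switches to
# Bayesian optimization with a Gaussian-process surrogate and an
# expected-improvement-style acquisition function.
best_parameters, best_values, experiment, model = optimize(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 15.0]},
    ],
    evaluation_function=branin,
    objective_name="branin",
    minimize=True,    # Branin's global minimum is about 0.397887
    total_trials=20,  # small illustrative budget
)

print(best_parameters, best_values)
```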
