Quantifying mismatch in Bayesian optimization

Our paper quantifying mismatch in Bayesian optimization has just been accepted to this year’s Bayesian Optimization workshop at NIPS. In it, we assess how encoding wrong prior smoothness assumptions about the underlying target function affects different acquisition functions. We found that mismatch can be a severe problem for the optimization routine, that the problem gets worse in higher dimensions, and that it remains even when hyper-parameters are optimized. The paper is therefore a short cautionary note: thinking hard about prior assumptions can sometimes be more important than choosing one particular acquisition function over another.
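The kind of mismatch we study can be illustrated with a small Gaussian-process sketch (this is an illustrative toy, not the paper's actual experimental setup; the kernels, lengthscale, and seed below are assumptions). A rough function is drawn from a Matérn-1/2 prior, and posteriors are then computed under both the matched rough kernel and a mismatched, overly smooth RBF kernel:

```python
import numpy as np

def rbf(x1, x2, ls=0.3):
    # Smooth (infinitely differentiable) squared-exponential kernel.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def matern12(x1, x2, ls=0.3):
    # Rough (Matern-1/2, Ornstein-Uhlenbeck) kernel: samples are continuous
    # but nowhere differentiable.
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-d / ls)

def gp_posterior(kernel, x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations via a Cholesky factorization.
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(kernel(x_test, x_test)) - np.sum(v ** 2, axis=0)
    return mu, var

rng = np.random.default_rng(0)
x_test = np.linspace(0.0, 1.0, 200)
x_train = rng.uniform(0.0, 1.0, 8)

# Draw the "true" target from the rough Matern-1/2 prior.
K_true = matern12(x_test, x_test) + 1e-8 * np.eye(len(x_test))
f_true = np.linalg.cholesky(K_true) @ rng.standard_normal(len(x_test))
y_train = np.interp(x_train, x_test, f_true)

# Posterior under the matched (rough) and mismatched (too-smooth) priors.
mu_match, var_match = gp_posterior(matern12, x_train, y_train, x_test)
mu_wrong, var_wrong = gp_posterior(rbf, x_train, y_train, x_test)

err_match = np.mean((mu_match - f_true) ** 2)
err_wrong = np.mean((mu_wrong - f_true) ** 2)
print(f"matched-kernel MSE:    {err_match:.4f}")
print(f"mismatched-kernel MSE: {err_wrong:.4f}")
```

Because an acquisition function is computed from exactly these posterior means and variances, a surrogate built on the wrong smoothness assumption feeds the optimizer a distorted picture of where to sample next.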

