This is nice to see. It pulls together ideas from Frazier's work, and it's interesting to see an acquisition function similar to Chimera applied to hierarchical composite optimization. Thanks also for putting this on GitHub and making it easier to follow 🥲
It's nice to see auto-differentiability explicitly called out as a strength here.
Moreover, its implementation is not auto-differentiable, limiting its usefulness as a composite objective for BO.
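For anyone less familiar with why that limitation bites, here is a minimal sketch (my own illustration in PyTorch, not code from the paper) of why a non-auto-differentiable scorer blocks gradient-based acquisition optimization. The posterior stand-in and the 0.7/0.3 weights are hypothetical:

```python
import torch

# Hypothetical candidate inputs to optimize over; requires_grad so a
# gradient-based optimizer (e.g., L-BFGS) can use d(acquisition)/dX.
X = torch.rand(5, 3, dtype=torch.double, requires_grad=True)

# Toy stand-in for posterior samples of two objectives at X.
posterior_samples = X.sum(dim=-1, keepdim=True).expand(-1, 2)

# A differentiable scalarization keeps the autograd graph intact:
score = (0.7 * posterior_samples[..., 0] + 0.3 * posterior_samples[..., 1]).sum()
score.backward()
print(X.grad is not None)  # True: gradients w.r.t. the inputs are available

# By contrast, a scorer that leaves torch (e.g., converts to NumPy
# internally) detaches the graph, so X.grad would never be populated and
# gradient-based acquisition optimization becomes impossible.
```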
This excerpt resonates with me. It states clearly a concern I often have: seeing simple black-box scalarization labeled as multi-objective optimization (which, as mentioned, is straightforward but has drawbacks).
In practice, such scalar scores are often used in a "black-box" manner (Fig. 1c, left), where each observation's multiple objective values are first concatenated into a single score, and standard single-objective BO is then employed to optimize this score over the search space [1,20]. While straightforward, this approach has two main drawbacks: (a) if input-based objectives are included, their known dependence on input parameters must be "re-learned" by the surrogate model, likely reducing optimization efficiency; (b) the scalar score itself is artificial and may not carry physical meaning, which can hinder the design of effective priors [21,22]. To address these issues, Frazier and co-workers introduced the concept of composite objectives [23], which apply a scalar score only after building surrogate models. Notably, calculating such composite objectives requires operating on (multiple) model posterior distributions, complicating practical implementation.
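To make the contrast concrete, here is a minimal sketch (my own, assuming BoTorch/GPyTorch; the scalarization and its 0.7/0.3 weights are hypothetical, not the paper's actual hierarchy) of black-box scalarization versus a composite objective applied to posterior samples:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective

train_X = torch.rand(20, 3, dtype=torch.double)  # 20 points, 3 inputs
train_Y = torch.rand(20, 2, dtype=torch.double)  # 2 raw objective values each

def scalarize(y1, y2):
    """Hypothetical weighted scalarization (placeholder for a real hierarchy)."""
    return 0.7 * y1 + 0.3 * y2

# (a) Black-box scalarization: collapse objectives *before* modeling,
# then run standard single-objective BO on the artificial score.
score = scalarize(train_Y[..., 0], train_Y[..., 1]).unsqueeze(-1)
gp_score = SingleTaskGP(train_X, score)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp_score.likelihood, gp_score))
acq_blackbox = qExpectedImprovement(gp_score, best_f=score.max())

# (b) Composite objective: model the raw objectives, and scalarize
# *posterior samples* inside the acquisition function instead.
gp_multi = SingleTaskGP(train_X, train_Y)  # one GP output per objective
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp_multi.likelihood, gp_multi))
composite = GenericMCObjective(
    lambda samples, X=None: scalarize(samples[..., 0], samples[..., 1])
)
acq_composite = qExpectedImprovement(
    gp_multi, best_f=composite(train_Y).max(), objective=composite
)
```

In (a) the surrogate must learn the (possibly unphysical) score directly, while in (b) the surrogates model each objective and the scalarization operates on posterior samples, which is exactly why the transform has to be auto-differentiable.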