Optimization of Multi-Armed Bandit Problems (Q168994): Difference between revisions

From geokb
(Created a new Item: Added new OpenAlex topic claimed by USGS staff from API)
 
(Changed label, description and/or aliases in en, and other parts: removed aliases from OpenAlex topic and added addressed subject claims to linkable items)
 
(2 intermediate revisions by the same user not shown)
aliases / en / 0: Bandit Optimization
aliases / en / 1: Bayesian Optimization
aliases / en / 2: Contextual Bandits
aliases / en / 3: Online Learning
aliases / en / 4: Convex Optimization
aliases / en / 5: Thompson Sampling
aliases / en / 6: Regret Analysis
aliases / en / 7: Gaussian Process Optimization
aliases / en / 8: Hyperparameter Optimization
aliases / en / 9: Adversarial Multi-Armed Bandits
description / en
This cluster of papers focuses on the optimization of multi-armed bandit problems, including topics such as Bayesian optimization, contextual bandits, online learning, convex optimization, Thompson sampling, regret analysis, Gaussian process optimization, hyperparameter optimization, and adversarial multi-armed bandits.
Improving decision-making through adaptive optimization in uncertain situations.
Property / same as: https://openalex.org/T12101 / rank
Normal rank
Property / OpenAlex ID: T12101 / rank
Normal rank

Latest revision as of 20:58, 21 September 2024

Language: English
Label: Optimization of Multi-Armed Bandit Problems
Description: Improving decision-making through adaptive optimization in uncertain situations.

Statements