ERC SUBLRN Machine Learning and Optimization Lab

Almost Optimal Exploration in Multi-Armed Bandits

Zohar Karnin, Tomer Koren, Oren Somekh. Almost Optimal Exploration in Multi-Armed Bandits. ICML 2013.

Author: Elad Hazan · Posted on 16/06/2013 (updated 12/01/2014) · Categories: Uncategorized
