ScienceOpen: research and publishing network
Machine Learning: ECML 2006
Bandit Based Monte-Carlo Planning
Author(s): Levente Kocsis, Csaba Szepesvári
Publication date (Print): 2006
Publisher: Springer Berlin Heidelberg
Related collections: Value-based Healthcare
Author and book information
Book Chapter
Publication date (Print): 2006
Pages: 282-293
DOI: 10.1007/11871842_29
SO-VID: 8aa32846-812d-4bde-9438-78ca92cd684b
Book chapters
pp. 270: Fast Variational Inference for Gaussian Process Models Through KL-Correction
pp. 679: B-Matching for Spectral Clustering
pp. 801: Dynamic Integration with Random Forests
pp. 282: Bandit Based Monte-Carlo Planning
pp. 318: Efficient Convolution Kernels for Dependency and Constituent Syntactic Trees
pp. 533: To Select or To Weigh: A Comparative Study of Model Selection and Model Weighing for SPODE Ensembles
pp. 646: Reinforcement Learning for MDPs with Constraints
pp. 654: Efficient Non-linear Control Through Neuroevolution
Similar content (3,386)
Stochastic Gradient Succeeds for Bandits
Authors: Jincheng Mei, Zixin Zhong, Bo Dai …
Multi-fidelity Gaussian process bandit optimisation
Authors: K Kandasamy, G. Dasarathy, J Oliva …
Variance-Aware Regret Bounds for Stochastic Contextual Dueling Bandits
Authors: Qiwei Di, Tao Jin, Yue Wu …
Cited by (217)
A Survey of Monte Carlo Tree Search Methods
Authors: Cameron B. Browne, Edward Powley, Daniel Whitehouse …
Mastering Atari, Go, chess and shogi by planning with a learned model
Authors: Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert …
PROGRESSIVE STRATEGIES FOR MONTE-CARLO TREE SEARCH
Authors: JOS UITERWIJK, Bruno Bouzy, H. Jaap van den Herik …