Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

  • Thomas Dueholm Hansen, Denmark
  • Peter Bro Miltersen
  • Uri Zwick, School of Computer Science, Tel Aviv University, Israel
Ye showed recently that the simplex method with Dantzig's pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes (MDPs) with a constant discount factor in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after at most O((mn/(1-γ)) · log(n/(1-γ))) iterations, where n is the number of states, m is the total number of actions in the MDP, and 0 < γ < 1 is the discount factor. We improve Ye's analysis in two respects. First, we improve the bound given by Ye and show that Howard's policy iteration algorithm actually terminates after at most O((m/(1-γ)) · log(n/(1-γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games, resolving a long-standing open problem.
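For intuition, the following is a minimal sketch of Howard's policy iteration for a finite discounted MDP, written in Python with NumPy; the function name policy_iteration, the tolerance parameter tol, and the small example instance are illustrative assumptions, not material from the paper. Each iteration evaluates the current policy exactly by solving a linear system and then switches every state with a strictly improving action to its best one-step lookahead action. The strategy iteration algorithm analyzed in the paper generalizes this loop to two players: the maximizer's states switch toward value-maximizing actions while the minimizer's states switch toward value-minimizing ones.

    import numpy as np

    def policy_iteration(P, r, gamma, tol=1e-9):
        """Howard's policy iteration for a finite discounted MDP (sketch).

        P[s][a] -- transition probability vector over states for action a in state s
        r[s][a] -- immediate reward for taking action a in state s
        gamma   -- discount factor, 0 < gamma < 1
        """
        n = len(P)
        pi = np.zeros(n, dtype=int)  # start from an arbitrary policy
        while True:
            # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
            P_pi = np.array([P[s][pi[s]] for s in range(n)])
            r_pi = np.array([r[s][pi[s]] for s in range(n)])
            v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
            # Policy improvement (Howard's rule): every state with a strictly
            # improving action switches to its best one-step lookahead action.
            new_pi = pi.copy()
            for s in range(n):
                q = [r[s][a] + gamma * np.dot(P[s][a], v) for a in range(len(P[s]))]
                best = int(np.argmax(q))
                if q[best] > v[s] + tol:
                    new_pi[s] = best
            if np.array_equal(new_pi, pi):
                return pi, v  # no improving switch exists, so pi is optimal
            pi = new_pi

    # Tiny illustrative instance: two states, each with a "stay" and a "move" action.
    P = [[[1.0, 0.0], [0.0, 1.0]],  # state 0: stay here / move to state 1
         [[0.0, 1.0], [1.0, 0.0]]]  # state 1: stay here / move to state 0
    r = [[0.0, 1.0],                # state 0: moving pays 1
         [2.0, 0.0]]                # state 1: staying pays 2
    pi, v = policy_iteration(P, r, gamma=0.9)  # optimal: move from 0, stay in 1

Each iteration costs only polynomial time (one linear solve plus one sweep over all actions), so the paper's bound on the number of iterations is what makes the overall algorithm strongly polynomial.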
Original language: English
Title of host publication: Proceedings of the Second Symposium on Innovations in Computer Science
Number of pages: 11
Publisher: Tsinghua University Press, Beijing
Publication year: 2011
Pages: 253-263
ISBN (print): 978-7-302-24517-9
Publication status: Published - 2011
Event: Innovations in Computer Science. ICS 2011 - Beijing, China
Duration: 6 Jan 2011 – 9 Jan 2011

Conference

Conference: Innovations in Computer Science. ICS 2011
Country: China
City: Beijing
Period: 06/01/2011 – 09/01/2011

