Abstract
Ye recently showed that the simplex method with Dantzig's pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes (MDPs) with a constant discount factor in strongly polynomial time. More precisely, Ye showed that both algorithms terminate after at most O(mn/(1−γ) · log(n/(1−γ))) iterations, where n is the number of states, m is the total number of actions in the MDP, and 0 < γ < 1 is the discount factor. We improve Ye's analysis in two respects. First, we improve the bound given by Ye and show that Howard's policy iteration algorithm actually terminates after at most O(m/(1−γ) · log(n/(1−γ))) iterations. Second, and more importantly, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm, a generalization of Howard's policy iteration algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games, resolving a long-standing open problem.
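For readers unfamiliar with the algorithm whose iteration count is being bounded, the following is a minimal sketch of Howard's policy iteration for a discounted MDP. The dense-array encoding (every state padded to k actions, transition tensor P of shape (n, k, n)) and the NumPy-based evaluation step are illustrative assumptions, not the paper's formulation; the bound above counts iterations of exactly this evaluate-then-improve loop.

```python
import numpy as np

def policy_iteration(P, r, gamma):
    """Howard's policy iteration for a discounted MDP (illustrative sketch).

    P[s, a, t]: probability of moving from state s to state t under action a
                (shape (n, k, n); states assumed padded to k actions each).
    r[s, a]:    immediate reward for playing action a in state s (shape (n, k)).
    gamma:      discount factor, 0 < gamma < 1.
    """
    n, k, _ = P.shape
    pi = np.zeros(n, dtype=int)          # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi for the value of pi.
        P_pi = P[np.arange(n), pi]       # (n, n) transition matrix under pi
        r_pi = r[np.arange(n), pi]       # (n,) reward vector under pi
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Policy improvement: switch every state to a greedy one-step-lookahead action.
        q = r + gamma * (P @ v)          # (n, k) action values
        pi_next = np.argmax(q, axis=1)
        if np.allclose(q[np.arange(n), pi_next], q[np.arange(n), pi]):
            return pi, v                 # no improving switch exists: pi is optimal
        pi = pi_next
```

Strategy iteration for the 2-player games studied in the paper generalizes this loop: the maximizer's actions are improved greedily as above, while the evaluation step computes the minimizer's best response to the current strategy rather than solving a single linear system.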
Original language | English |
---|---|
Title of host publication | Proceedings of the Second Symposium on Innovations in Computer Science |
Number of pages | 11 |
Publisher | Tsinghua University Press, Beijing |
Publication date | 2011 |
Pages | 253-263 |
ISBN (Print) | 978-7-302-24517-9 |
Publication status | Published - 2011 |
Event | Innovations in Computer Science (ICS 2011), Beijing, China. Duration: 6 Jan 2011 → 9 Jan 2011 |