Convergence properties of policy iteration

Manuel S. Santos, John Rust

Research output: Contribution to journal › Article › peer-review

41 Scopus citations


This paper analyzes asymptotic convergence properties of policy iteration in a class of stationary, infinite-horizon Markovian decision problems that arise in optimal growth theory. These problems have continuous state and control variables and must therefore be discretized in order to compute an approximate solution. The discretization may render inapplicable known convergence results for policy iteration such as those of Puterman and Brumelle [Math. Oper. Res., 4 (1979), pp. 60-69]. Under certain regularity conditions, we prove that for piecewise linear interpolation, policy iteration converges quadratically. Also, under more general conditions we establish that convergence is superlinear. We show how the constants involved in these convergence orders depend on the grid size of the discretization. These theoretical results are illustrated with numerical experiments that compare the performance of policy iteration and the method of successive approximations.
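The abstract contrasts policy iteration with the method of successive approximations on a discretized problem. As a rough illustration of the two algorithms being compared (not the paper's actual implementation — the transition matrices, rewards, and discount factor below are hypothetical placeholders for a generic finite-state discretization):

```python
import numpy as np

def policy_iteration(P, r, beta, max_iter=100):
    """Policy iteration on a discretized MDP.

    P: (A, S, S) transition matrices, r: (A, S) rewards, beta: discount factor.
    Each step solves the policy-evaluation linear system
    (I - beta * P_pi) v = r_pi exactly, then improves the policy greedily.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: exact solve for the current stationary policy.
        P_pi = P[policy, np.arange(S), :]
        r_pi = r[policy, np.arange(S)]
        v = np.linalg.solve(np.eye(S) - beta * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = r + beta * P @ v          # shape (A, S)
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                      # policy is stable: optimal
        policy = new_policy
    return v, policy

def successive_approximations(P, r, beta, tol=1e-10, max_iter=100_000):
    """Method of successive approximations (value iteration), for comparison.

    Converges only linearly at rate beta, versus the superlinear/quadratic
    rates the paper establishes for policy iteration.
    """
    v = np.zeros(P.shape[1])
    for _ in range(max_iter):
        v_new = (r + beta * P @ v).max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

On the same random test problem, both routines converge to the same fixed point, but policy iteration typically does so in a handful of exact linear solves while successive approximations needs many contraction steps.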

Original language: English (US)
Pages (from-to): 2094-2115
Number of pages: 22
Journal: SIAM Journal on Control and Optimization
Issue number: 6
State: Published - Dec 22 2004
Externally published: Yes


Keywords

  • Complexity
  • Computational cost
  • Method of successive approximations
  • Policy iteration
  • Quadratic and superlinear convergence

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics


