Markov-achievable payoffs for finite-horizon decision models

Victor Pestien, Xiaobo Wang

Research output: Contribution to journal › Article › peer-review


Consider the class of n-stage decision models with state space S, action space A, and payoff function g : (S × A)n × S → ℝ. The function g is Markov-achievable if, for every possible set of available randomized actions and every transition law, each plan has a corresponding Markov plan whose value is at least as good. A condition on g, called the "non-forking linear sections property", is necessary and sufficient for g to be Markov-achievable. If g satisfies the slightly stronger "general linear sections property", then g can be written as a sum of products of certain simple neighboring-stage payoffs.
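The notion of Markov-achievability can be illustrated on a toy model. The sketch below (an illustration only, not the paper's construction; the model, transition law, and reward values are invented for the example) takes a 2-state, 2-action, 2-stage model with an additive payoff g, a classical case where a Markov plan matches the best history-dependent plan. It enumerates all deterministic history-dependent plans and all deterministic Markov plans and compares their best values.

```python
from itertools import product

# Tiny finite-horizon model: 2 states, 2 actions, n = 2 stages.
# (Illustrative assumption; the paper's setting allows randomized
# actions and general payoffs g : (S x A)^n x S -> R.)
S, A, n = [0, 1], [0, 1], 2

# Transition law: P[s][a] is a distribution over next states.
P = {0: {0: [0.7, 0.3], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.9, 0.1]}}

# Additive payoff g(s0,a0,s1,a1,s2) = r(s0,a0) + r(s1,a1) --
# a standard example of a Markov-achievable payoff function.
r = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.5, (1, 1): 2.0}

def value(plan, s0):
    """Expected payoff of a deterministic plan from initial state s0.
    A history is a tuple (s0, a0, s1, ..., st)."""
    def go(h):
        t = len(h) // 2          # number of actions taken so far
        if t == n:
            return sum(r[(h[2*i], h[2*i+1])] for i in range(n))
        a = plan(h)
        return sum(p * go(h + (a, s2))
                   for s2, p in enumerate(P[h[-1]][a]) if p > 0)
    return go((s0,))

def enumerate_plans(s0, markov):
    """Yield all deterministic plans; Markov plans see only (stage, state)."""
    points, frontier = set(), [(s0,)]
    for t in range(n):
        nxt = []
        for h in frontier:
            points.add((t, h[-1]) if markov else h)
            for a in A:
                nxt += [h + (a, s2)
                        for s2, p in enumerate(P[h[-1]][a]) if p > 0]
        frontier = nxt
    points = sorted(points)
    for choice in product(A, repeat=len(points)):
        table = dict(zip(points, choice))
        yield lambda h, tb=table: tb[(len(h) // 2, h[-1]) if markov else h]

s0 = 0
best_general = max(value(p, s0) for p in enumerate_plans(s0, markov=False))
best_markov = max(value(p, s0) for p in enumerate_plans(s0, markov=True))
print(best_general, best_markov)  # equal for this additive g
```

For a non-additive g that violates the non-forking linear sections property, the two maxima can differ, which is exactly the gap the paper's condition rules out.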

Original language: English (US)
Pages (from-to): 101-118
Number of pages: 18
Journal: Stochastic Processes and their Applications
Issue number: 1
State: Published - Jan 15 1998


Keywords

  • Markov decision model
  • Markov plan
  • Payoff function

ASJC Scopus subject areas

  • Statistics and Probability
  • Modeling and Simulation
  • Applied Mathematics


